CN120856652A - Data stream classification method, device and equipment
- Publication number: CN120856652A
- Application number: CN202410522182.2A
- Authority: CN (China)
- Legal status: Pending
Abstract
The application provides a data stream classification method, device, and equipment. The method includes a first network function classifying a data stream based on a first model, where the first model is a pre-trained artificial intelligence (AI) model for classifying data streams. According to the application, the pre-trained first model is built into the first network function, so that data flows can be classified based on the first model, interaction among multiple network elements during data flow classification is avoided, the data flow classification procedure is simplified, network load is reduced, and the requirements of 6G native intelligence and differentiated guarantees for service data flows are met.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a data stream classification method, apparatus, and device.
Background
In existing network environments, traffic data streams exhibit diverse transmission and information characteristics. To manage network resources efficiently, implement differentiated Quality of Service (QoS) control, and ensure network security, it is critical to accurately identify and classify these data streams. However, in current standards, traffic is identified and classified by the user plane function (User Plane Function, UPF) according to flow filtering rules and packet flow descriptions provided by the session management function (Session Management Function, SMF). This approach requires additional information to classify a service flow and additional signaling interaction between the SMF and the UPF, which increases the network load.
Disclosure of Invention
The application aims to provide a data flow classification method, apparatus, and device that solve the problem of high network load in existing traffic identification and classification methods.
An embodiment of the present application provides a data flow classification method, performed by a first network function, the method including:
the first network function classifies the data stream based on a first model, the first model being a pre-trained artificial intelligence (AI) model for classifying the data stream.
Optionally, before classifying the data stream based on the first model, the method further comprises:
Obtaining the first model through model training;
And sending the registration information of the first model to a second network function, wherein the second network function is used for managing and/or maintaining the registration information.
Optionally, the registration information includes at least one of:
a model identifier;
a model type;
an algorithm;
a storage address;
an application scenario;
a performance indicator;
a deployment requirement;
a computing power requirement;
version information;
a data format for model training and/or model inference;
a data dimension for model training and/or model inference.
Optionally, the method further comprises:
Sending a model file of the first model to a third network function, wherein the third network function is used for storing the model file;
And receiving a first response message sent by the third network function, wherein the first response message carries the storage address of the model file.
Optionally, obtaining the first model through model training includes at least one of the following:
According to a preset training period, performing model training by using the received data stream to obtain an updated first model;
And according to the received first request message, performing model training by using the collected data stream to obtain the updated first model.
Optionally, before model training, the method further comprises:
Receiving a first request message sent by a fourth network function, wherein the first request message is used for requesting model updating;
collecting a data stream according to the first request message;
The first request message carries at least one of a data requirement, a storage address of a model file of the first model and required environment information.
Optionally, the collecting the data stream according to the first request message includes:
Determining a network function of a data stream for model updating according to the first request message;
Sending a second message to the network function, wherein the second message is used for requesting a data stream for model update;
And receiving a second response message sent by the network function, wherein the second response message carries the data flow.
Optionally, before classifying the data stream based on the first model, the method further comprises:
receiving a second request message sent by a fifth network function, wherein the second request message is used for requesting data flow classification service, and the second request message carries data flows to be classified;
In the case of classifying a data stream based on a first model, the method further comprises:
And sending a third response message to the fifth network function, wherein the third response message carries a result of classifying the data flow.
An embodiment of the present application provides a data flow classification method, performed by a fifth network function, the method including:
the fifth network function obtains the address of the first network function;
The fifth network function sends the data flow to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
Optionally, the acquiring the address of the first network function includes:
Sending a third request message to a fourth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirements and classification attributes;
And receiving a fourth response message sent by the fourth network function, wherein the fourth response message carries an address of the first network function for providing the service.
An embodiment of the present application provides a data flow classification method, performed by a fourth network function, the method including:
the fourth network function receives a third request message sent by a fifth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement and classification attribute;
the fourth network function queries a first network function meeting the requirement corresponding to the service requirement identifier according to the third request message;
The fourth network function sends a fourth response message to the fifth network function, wherein the fourth response message carries an address of the first network function for providing the service;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
Optionally, the querying, according to the third request message, the first network function that meets the requirement corresponding to the service requirement identifier includes:
sending a fourth request message to the second network function according to the third request message, wherein the fourth request message carries the requested service type, and the service type indicates data flow classification;
receiving a fifth response message sent by the second network function, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services;
If the fifth response message carries information of a first network function capable of providing the data flow classification service, determining that the network function corresponding to the information carried by the fifth response message is the first network function meeting the requirement corresponding to the service requirement identifier;
and if the fifth response message carries information of a plurality of first network functions capable of providing the data flow classification service, selecting one first network function meeting the requirement corresponding to the service requirement identifier from the plurality of first network functions.
An embodiment of the present application provides a data flow classification method, performed by a second network function, the method including:
the second network function receives registration information of a first model sent by the first network function;
storing a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
Optionally, the registration information includes at least one of:
a model identifier;
a model type;
an algorithm;
a storage address;
an application scenario;
a performance indicator;
a deployment requirement;
a computing power requirement;
version information;
a data format for model training and/or model inference;
a data dimension for model training and/or model inference.
Optionally, the method further comprises:
receiving a fourth request message sent by a fourth network function, wherein the fourth request message carries a requested service type, and the service type indicates data flow classification;
and sending a fifth response message to the fourth network function according to the fourth request message, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services.
An embodiment of the present application provides a network device, which is a first network function, including a memory, a transceiver, and a processor:
The memory is configured to store a computer program; the transceiver is configured to receive and transmit data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:
The data streams are classified based on a first model, which is a pre-trained artificial intelligence AI model for classifying the data streams.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Obtaining the first model through model training;
And sending the registration information of the first model to a second network function, wherein the second network function is used for managing and/or maintaining the registration information.
Optionally, the registration information includes at least one of:
a model identifier;
a model type;
an algorithm;
a storage address;
an application scenario;
a performance indicator;
a deployment requirement;
a computing power requirement;
version information;
a data format for model training and/or model inference;
a data dimension for model training and/or model inference.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Sending a model file of the first model to a third network function, wherein the third network function is used for storing the model file;
And receiving a first response message sent by the third network function, wherein the first response message carries the storage address of the model file.
Optionally, the processor is configured to read the computer program in the memory and perform at least one of the following operations:
According to a preset training period, performing model training by using the received data stream to obtain an updated first model;
And according to the received first request message, performing model training by using the collected data stream to obtain the updated first model.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Receiving a first request message sent by a fourth network function, wherein the first request message is used for requesting model updating;
collecting a data stream according to the first request message;
The first request message carries at least one of a data requirement, a storage address of a model file of the first model and required environment information.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Determining a network function of a data stream for model updating according to the first request message;
Sending a second message to the network function, wherein the second message is used for requesting a data stream for model update;
And receiving a second response message sent by the network function, wherein the second response message carries the data flow.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
receiving a second request message sent by a fifth network function, wherein the second request message is used for requesting data flow classification service, and the second request message carries data flows to be classified;
In the case of classifying the data stream based on the first model, the processor is further configured to perform the following operation:
And sending a third response message to the fifth network function, wherein the third response message carries a result of classifying the data flow.
An embodiment of the present application provides a network device that is a fifth network function, including a memory, a transceiver, and a processor:
The memory is configured to store a computer program; the transceiver is configured to receive and transmit data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:
acquiring an address of a first network function;
Transmitting a data stream to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Sending a third request message to a fourth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirements and classification attributes;
And receiving a fourth response message sent by the fourth network function, wherein the fourth response message carries an address of the first network function for providing the service.
An embodiment of the present application provides a network device, which is a fourth network function, including a memory, a transceiver, and a processor:
The memory is configured to store a computer program; the transceiver is configured to receive and transmit data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:
receiving a third request message sent by a fifth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement and classification attribute;
inquiring a first network function meeting the requirement corresponding to the service requirement identifier according to the third request message;
sending a fourth response message to the fifth network function, wherein the fourth response message carries an address of the first network function for providing the service;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
sending a fourth request message to the second network function according to the third request message, wherein the fourth request message carries the requested service type, and the service type indicates data flow classification;
receiving a fifth response message sent by the second network function, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services;
If the fifth response message carries information of a first network function capable of providing the data flow classification service, determining that the network function corresponding to the information carried by the fifth response message is the first network function meeting the requirement corresponding to the service requirement identifier;
and if the fifth response message carries information of a plurality of first network functions capable of providing the data flow classification service, selecting one first network function meeting the requirement corresponding to the service requirement identifier from the plurality of first network functions.
An embodiment of the present application provides a network device, which is a second network function, including a memory, a transceiver, and a processor:
The memory is configured to store a computer program; the transceiver is configured to receive and transmit data under the control of the processor; and the processor is configured to read the computer program in the memory and perform the following operations:
receiving registration information of a first model sent by a first network function;
storing a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
Optionally, the registration information includes at least one of:
a model identifier;
a model type;
an algorithm;
a storage address;
an application scenario;
a performance indicator;
a deployment requirement;
a computing power requirement;
version information;
a data format for model training and/or model inference;
a data dimension for model training and/or model inference.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
receiving a fourth request message sent by a fourth network function, wherein the fourth request message carries a requested service type, and the service type indicates data flow classification;
and sending a fifth response message to the fourth network function according to the fourth request message, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services.
An embodiment of the present application provides a data flow classification device applied to a first network function, the device including:
And the classification unit is used for classifying the data streams based on a first model, wherein the first model is an artificial intelligence AI model which is trained in advance and used for classifying the data streams.
An embodiment of the present application provides a data flow classification apparatus applied to a fifth network function, the apparatus including:
A first acquiring unit, configured to acquire an address of a first network function;
a first sending unit, configured to send a data stream to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
An embodiment of the present application provides a data flow classification device applied to a fourth network function, the device including:
the first receiving unit is used for receiving a third request message sent by a fifth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement and classification attribute;
A query unit, configured to query, according to the third request message, a first network function that satisfies a requirement corresponding to the service requirement identifier;
a second sending unit, configured to send a fourth response message to the fifth network function, where the fourth response message carries an address of the first network function that is used to provide a service;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
An embodiment of the present application provides a data flow classification device applied to a second network function, the device including:
The second receiving unit is used for receiving the registration information of the first model sent by the first network function;
a storage unit, configured to store a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
An embodiment of the present application provides a processor-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the data stream classification method described above.
Embodiments of the present application provide a computer program product comprising computer instructions which, when executed by a processor, implement the steps of the data stream classification method described above.
The technical solutions of the application have the following beneficial effects:
According to the embodiments of the application, the pre-trained first model is built into the first network function, so that data flows can be classified based on the first model, interaction among multiple network elements during data flow classification is avoided, the data flow classification procedure is simplified, network load is reduced, and the requirements of 6G native intelligence and differentiated guarantees for service data flows are met.
Drawings
FIG. 1 is a first flow chart of a data flow classification method according to an embodiment of the application;
FIG. 2 is a flow chart of the intelligent user plane function periodically using forwarded data to train and update the first model;
FIG. 3 is a schematic flow chart of the NWDAF training and updating the first model;
FIG. 4 is a schematic flow chart of the intelligent user plane function providing a data flow classification service for the access network RAN;
FIG. 5 is a second flow chart of a data flow classification method according to an embodiment of the application;
FIG. 6 is a third flow chart of a data flow classification method according to an embodiment of the application;
FIG. 7 is a fourth flow chart of a data flow classification method according to an embodiment of the application;
FIG. 8 is a first schematic diagram of a data stream classification apparatus according to an embodiment of the application;
FIG. 9 is a second schematic diagram of a data stream classification apparatus according to an embodiment of the application;
FIG. 10 is a third schematic diagram of a data stream classification apparatus according to an embodiment of the application;
FIG. 11 is a fourth schematic diagram of a data stream classification apparatus according to an embodiment of the application;
FIG. 12 is a first schematic structural diagram of a network device according to an embodiment of the application;
FIG. 13 is a second schematic structural diagram of a network device according to an embodiment of the application;
FIG. 14 is a third schematic structural diagram of a network device according to an embodiment of the application;
FIG. 15 is a fourth schematic structural diagram of a network device according to an embodiment of the application.
Detailed Description
To make the technical problems to be solved, the technical solutions, and the advantages of the present application clearer, the following detailed description is given with reference to the accompanying drawings and specific embodiments. In the following description, specific details such as specific configurations and components are provided merely to facilitate a thorough understanding of the embodiments of the application. It will therefore be apparent to those skilled in the art that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the application. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present application, it should be understood that the sequence numbers of the following processes do not mean the order of execution, and the order of execution of the processes should be determined by the functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
In the embodiments of the application, the term "and/or" describes an association between associated objects and indicates that three relationships may exist. For example, "A and/or B" may mean that A exists alone, A and B exist simultaneously, or B exists alone. The character "/" generally indicates an "or" relationship between the associated objects.
The term "plurality" in embodiments of the present application means two or more, and other adjectives are similar.
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiments of the application provide a data flow classification method, apparatus, and device, which are used to solve the problem of high network load in existing traffic identification and classification methods.
The method and the apparatus are based on the same application concept. Because the principles by which the method and the apparatus solve the problem are similar, the implementations of the apparatus and the method may refer to each other, and repeated descriptions are not provided.
As shown in fig. 1, an embodiment of the present application provides a data flow classification method applied to a first network function, where the method specifically includes the following steps:
step 101, the first network function classifies the data stream based on a first model, which is a pre-trained artificial intelligence AI model for classifying the data stream.
In this embodiment, a pre-trained first model is provided within the first network function, the first model being a model for intelligently classifying data streams. The first model may be pre-trained for the first network function. The data stream may be a traffic data stream, i.e. the first model may be used to classify traffic data streams.
Alternatively, the first network function may be an intelligent user plane function or a network data analytics function. Taking the first network function being an intelligent user plane function as an example: because the embodiment of the application considers network element intelligence, the first network function has a data flow classification capability in addition to the functions of the user plane function (User Plane Function, UPF); that is, the first network function may be an intelligent user plane function with a data flow classification capability. Taking the first network function being a network data analytics function as an example: the first network function has a data flow classification capability in addition to the functions of the network data analytics function (Network Data Analytics Function, NWDAF); that is, the first network function may be a network data analytics function with a data flow classification capability. It should be noted that the intelligent user plane function and the network data analytics function are merely examples of the first network function, and the name of the first network function is not limited herein.
Optionally, the first network function may also be another network element with data traffic classification requirements. By building the first model into the network element, uplink and/or downlink data streams can be classified and labeled with data stream types, so that the network element can map different QoS guarantee policies according to the classification labels of the data streams, and the base station can reasonably allocate resources and/or control priorities according to the classification labels.
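For illustration only, the following is a minimal Python sketch of how a network element might map the classification labels produced by the first model to QoS guarantee policies; the label names, 5QI values, and policy fields are assumptions and are not specified by the application.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QosPolicy:
    """Hypothetical QoS guarantee policy fields, for illustration only."""
    five_qi: int                # QoS identifier (illustrative values)
    priority_level: int         # scheduling priority; lower means higher priority
    gbr_kbps: Optional[int]     # guaranteed bit rate, None for non-GBR flows

# Hypothetical mapping from data-flow classification labels to QoS policies.
LABEL_TO_POLICY = {
    "key_semantic_stream": QosPolicy(five_qi=80, priority_level=10, gbr_kbps=20_000),
    "background_semantic_stream": QosPolicy(five_qi=9, priority_level=80, gbr_kbps=None),
    "audio": QosPolicy(five_qi=1, priority_level=20, gbr_kbps=160),
}

def policy_for(label: str) -> QosPolicy:
    """Return the QoS policy mapped to a classification label (best-effort default)."""
    return LABEL_TO_POLICY.get(label, QosPolicy(five_qi=9, priority_level=90, gbr_kbps=None))

print(policy_for("key_semantic_stream"))
```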
According to the embodiment of the application, the pre-trained first model is built into the first network function, so that data flows can be classified based on the first model, interaction among multiple network elements during data flow classification is avoided, the data flow classification procedure is simplified, network load is reduced, and the requirements of 6G native intelligence and differentiated guarantees for service data flows are met.
As an alternative embodiment, before classifying the data stream based on the first model, the method further comprises:
Obtaining the first model through model training;
And sending the registration information of the first model to a second network function, wherein the second network function is used for managing and/or maintaining the registration information.
In this embodiment, the second network function may be a network repository function. The first network function also has the capability of maintaining the first model: it may obtain the first model through training, and may also train an existing first model to update it.
In addition to the capabilities of the network repository function (Network Repository Function, NRF), the second network function is responsible for maintaining the AI models and/or AI services registered in the network and providing discovery of AI models and/or services. In the case that the first network function obtains the first model through training, the first model also needs to be registered with the second network function, and registration information is provided to the second network function.
Optionally, the registration information includes at least one of the following (an illustrative record is sketched after this list):
1) A model identifier, which may be the identifier (ID) of the first model or the ID of the AI service corresponding to the first model;
2) A model type, which may be the type of the first model or of the AI service corresponding to the first model, for example, data flow classification;
3) An algorithm, which may be the algorithm used by the first model to classify data flows and/or the algorithm used by the AI service corresponding to the first model;
4) A storage address, which may be the storage address of a model file of the first model, the model file comprising relevant data of the first model; optionally, a service producer, i.e., the producer of the AI service corresponding to the first model, may also be included;
5) An application scenario, for example, the application scenario of the first model is data flow classification for an XX service;
6) A performance indicator, i.e., the performance indicator of the first model and/or of the AI service corresponding to the first model;
7) Deployment requirements, such as parameter requirements and environment requirements for deploying the first model within the first network function;
8) A computing power requirement, including the computing power required to classify data flows using the first model;
9) Version information, for example, the initial first model, the first updated first model, the n-th updated first model, and so on;
10) Data format for model training and/or model inference;
11) Data dimension for model training and/or model inference.
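As referenced above, the registration information could, for instance, be organized as a structured record along the following lines; this is a minimal, non-normative Python sketch, and all field names and example values are assumptions rather than standardized information elements.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelRegistration:
    """Illustrative registration record sent by the first network function
    to the second network function (e.g., a network repository function)."""
    model_id: str                      # model identifier / AI service ID
    model_type: str                    # e.g., "data_flow_classification"
    algorithm: str                     # e.g., "CNN", "LSTM", "ResNet"
    storage_address: str               # where the model file is stored (e.g., in the ADRF)
    application_scenario: str          # e.g., "video semantic stream classification"
    performance_indicator: dict = field(default_factory=dict)   # e.g., {"accuracy": 0.95}
    deployment_requirement: dict = field(default_factory=dict)  # parameter/environment needs
    compute_requirement: Optional[str] = None                   # computing power for inference
    version: str = "v1"
    training_data_format: Optional[str] = None
    inference_data_dimension: Optional[List[int]] = None

registration = ModelRegistration(
    model_id="flow-classifier-001",
    model_type="data_flow_classification",
    algorithm="CNN",
    storage_address="adrf://models/flow-classifier-001",
    application_scenario="video semantic stream classification",
    performance_indicator={"accuracy": 0.95},
)
```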
In this embodiment, after obtaining the first model, the first network function may register it with the second network function according to model registration template information. The first model used by the first network function for intelligent classification of data streams may be implemented with a residual network (ResNet), a convolutional neural network (Convolutional Neural Network, CNN), a long short-term memory (Long Short-Term Memory, LSTM) network, or other deep learning models.
Data streams may be classified according to one or more classification schemes, such as classification by service type, by importance, or by the application generating the data. For example, data streams can be classified into images, videos, audio, point clouds, and the like according to service type; video semantic data can be further differentiated into key semantic streams and background semantic streams according to importance; and data streams can be classified according to the application that generates the data.
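To make the classification step concrete, the following is a minimal sketch (assuming PyTorch) of a 1-D CNN that classifies a flow represented as a fixed-length sequence of per-packet features; the feature layout, sequence length, and class set are illustrative assumptions and not defined by the application.

```python
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    """1-D CNN over a sequence of per-packet features (e.g., packet size, direction,
    inter-arrival time); outputs one score per traffic class. Illustrative only."""

    def __init__(self, num_features: int = 3, num_classes: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x shape: (batch, num_features, sequence_length)
        return self.fc(self.conv(x).squeeze(-1))

# Hypothetical class labels corresponding to the service-type scheme above.
CLASSES = ["image", "video", "audio", "point_cloud"]
model = FlowClassifier()
packets = torch.randn(1, 3, 64)   # one flow: 3 features x 64 packets
label = CLASSES[model(packets).argmax(dim=1).item()]
print(label)
```

An LSTM or ResNet-style backbone could replace the convolutional stack here without changing the surrounding flow-handling logic.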
As an alternative embodiment, the method further comprises:
Sending a model file of the first model to a third network function, wherein the third network function is used for storing the model file;
And receiving the third network function to send a first response message, wherein the first response message carries the storage address of the model file.
In this embodiment, the third network function may be an analytics data repository function (Analytics Data Repository Function, ADRF), which may store the model file of the first model. When the first network function obtains the first model through model training (either initial model training or a model update), the model file of the first model is stored in the third network function, and the third network function sends a storage response message for the model file to the first network function, where the response message includes the storage address of the model file.
As an alternative embodiment, obtaining the first model through model training includes at least one of the following:
(1) According to a preset training period, performing model training by using the received data stream to obtain an updated first model;
In this embodiment, the preset training period may be preconfigured, predefined, or agreed by the network side device. The received data stream may be a data stream that the first network function needs to forward. Optionally, the first network function may also perform model training using the forwarded data to obtain an initial first model, and optionally, the first network function may also perform model training using the forwarded data according to the preset training period and update the first model. In this embodiment, the first network function autonomously performs model training.
(2) And according to the received first request message, performing model training by using the collected data stream to obtain the updated first model.
In this embodiment, the first network function may perform model training under a request of another network element, that is, the first network function is triggered by the other network element to perform model training, and in this case, the first network function needs to perform data collection, and perform model training using the collected data to obtain the first model.
Optionally, before model training, the method further comprises:
Receiving a first request message sent by a fourth network function, wherein the first request message is used for requesting model updating;
collecting a data stream according to the first request message;
The first request message carries at least one of a data requirement, a storage address of a model file of the first model and required environment information.
In this embodiment, the fourth network function may be a network element other than the first network function, such as a model selection function, that requests the model update. Optionally, the fourth network function may query the NRF for models registered in the network, match a registered model according to the requirement template input by other network elements and the related information of the registered model stored in the second network function and/or the third network function, and determine the AI type and algorithm according to the requirement. When the performance of the first model is poor or there is a model update requirement, the fourth network function triggers the model update and sends a request message to the first network function; after receiving the request message, the first network function collects data and uses the collected data to update the model.
The request message sent by the fourth network function may carry information such as the registration information of the first model, the data requirement, and the data storage address. The first network function then collects data according to the data requirement, the data storage address, and other information.
Optionally, the collecting the data stream according to the first request message includes:
Determining a network function of a data stream for model updating according to the first request message;
Sending a second message to the network function, wherein the second message is used for requesting a data stream for model update;
And receiving a second response message sent by the network function, wherein the second response message carries the data flow.
In this embodiment, after receiving the request message sent by the fourth network function, the first network function determines the data source for model training based on the information carried in the request message and initiates data collection to the corresponding network function; that network function sends a response message to the first network function, and the response message carries the service flow data. Optionally, the first network function may use a data collection coordination function (Data Collection Coordination Function, DCCF) for data collection.
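As a rough sketch of this exchange (all message fields, function names, and the stub below are assumptions for illustration), the first network function might resolve the data source from the first request message, send it a second message, and read the data flows out of the second response message:

```python
from typing import Dict, List

class NfStub:
    """Stand-in for a network function that can export flow data for model updates."""
    def request_data(self, second_message: dict) -> dict:
        # Second response message: carries the requested data flows.
        return {"result": "OK", "data_flows": [{"flow_id": 1, "packets": []}]}

def collect_training_flows(first_request: dict, nf_clients: Dict[str, NfStub]) -> List[dict]:
    """Determine which network function holds the flows needed for the model update
    (from the first request message), send it a second message requesting the data,
    and return the flows carried in the second response message."""
    data_req = first_request["data_requirement"]                 # hypothetical field names
    source_nf = data_req["source_nf"]
    second_message = {"purpose": "model_update", "data_format": data_req.get("format")}
    second_response = nf_clients[source_nf].request_data(second_message)
    return second_response["data_flows"]

flows = collect_training_flows(
    {"data_requirement": {"source_nf": "UPF-1", "format": "per_packet_features"}},
    {"UPF-1": NfStub()},
)
print(len(flows))
```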
As an alternative embodiment, before classifying the data stream based on the first model, the method further comprises:
receiving a second request message sent by a fifth network function, wherein the second request message is used for requesting data flow classification service, and the second request message carries data flows to be classified;
In the case of classifying a data stream based on a first model, the method further comprises:
And sending a third response message to the fifth network function, wherein the third response message carries a result of classifying the data flow.
In this embodiment, the fifth network function may be a network function with a data flow classification service requirement. The fifth network function may also be a radio access network (Radio Access Network, RAN). The fifth network function initiates a data flow classification service discovery request, and the first network function provides the traffic classification service for the fifth network function. The traffic classification service requested by the fifth network function may be used to classify non-ultra-low-latency traffic.
The implementation procedure of autonomous model update by the first network function is illustrated below.
As shown in fig. 2, taking the first network function being an intelligent user plane function as an example, the intelligent user plane function maintains and/or updates the first model. The process in which the intelligent user plane function periodically trains and updates the first model using forwarded data includes:
Step 21, the intelligent user plane function collects forwarded data according to the data format, data dimension, and other information required for model training in the registration information of the first model, and trains the model using the collected data;
Step 22, the intelligent user plane function stores the model file of the newly trained first model in a third network function (such as the ADRF);
When the intelligent user plane function is deployed in a service-based manner, data may not be forwarded through the SMF; in that case, the SMF in fig. 2 may be omitted, and the intelligent user plane function and the ADRF may interact directly.
Step 23, the third network function (such as the ADRF) sends a model storage response message to the intelligent user plane function, where the response message includes the storage address of the model file;
Step 24, the intelligent user plane function sends a model update message to a second network function (such as a network repository function), where the message carries updated registration information of the first model, such as the model file storage address, model performance, and model version;
Step 25, the second network function (such as the network repository function) sends a model update response message to the intelligent user plane function.
In this embodiment, the intelligent user plane function has model training and inference capabilities and can periodically use forwarded data to train and update the first model, which effectively addresses the lack of flexibility and generalization capability of traditional service flow classification methods, improves the intelligence of the network function, and lays a solid foundation for the intelligent development of the 6G network.
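Read as pseudocode, one pass of steps 21 to 25 of fig. 2 might look like the sketch below; the stub classes and message fields are placeholders for the intelligent user plane function, ADRF, and network repository function interfaces, which are not defined at this level of detail by the application.

```python
from typing import List, Tuple

class IntelligentUpfStub:
    """Minimal stand-in for the intelligent user plane function (illustrative only)."""
    model_id = "flow-classifier-001"
    _version = 1

    def collect_forwarded_flows(self) -> List[dict]:
        return [{"flow_id": 1, "packets": []}]           # data forwarded during the training period

    def train_first_model(self, flows: List[dict]) -> Tuple[bytes, dict]:
        return b"<retrained first model>", {"accuracy": 0.96}

    def next_version(self) -> str:
        self._version += 1
        return f"v{self._version}"

class AdrfStub:
    def store_model_file(self, model_file: bytes) -> dict:
        # Step 23: model storage response carrying the storage address.
        return {"result": "OK", "storage_address": "adrf://models/flow-classifier-001/v2"}

class NrfStub:
    def update_registration(self, info: dict) -> dict:
        return {"result": "OK"}                          # step 25: model update response

def maintenance_cycle(upf: IntelligentUpfStub, adrf: AdrfStub, nrf: NrfStub) -> dict:
    """One pass of steps 21-25: train on forwarded data, store the model file,
    then refresh the registration with the new storage address and version."""
    flows = upf.collect_forwarded_flows()                        # step 21
    model_file, metrics = upf.train_first_model(flows)           # step 21
    store_resp = adrf.store_model_file(model_file)               # steps 22-23
    return nrf.update_registration({                             # step 24
        "model_id": upf.model_id,
        "storage_address": store_resp["storage_address"],
        "performance_indicator": metrics,
        "version": upf.next_version(),
    })

print(maintenance_cycle(IntelligentUpfStub(), AdrfStub(), NrfStub()))
```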
The implementation procedure of the first network function for performing model update based on the request of other network elements is illustrated below.
As shown in fig. 3, taking an example in which the first network function is a network data analytics function (NWDAF), the fourth network function is a model selection function, the second network function is a network repository function, and the third network function is the ADRF, the process of the NWDAF training and updating the first model includes:
Step 31, when the performance of the first model is poor or there is a model update requirement, the model selection function sends an update request for the first model to the analytics logical function (AnLF), where the update request includes registration information of the model (such as the data requirement, the model storage address, and dependent environment information);
Step 32, the AnLF determines the source of the data flow classification data used for the model update and initiates a data collection request to the corresponding network function entities (NFs); optionally, data collection is performed using the DCCF;
Step 33, the NFs send a data collection response to the AnLF, including the service flow data;
Step 34, the AnLF sends a model training request to the model training logical function (MTLF), including registration information of the model (such as the model storage address and dependent environment information);
Step 35, the MTLF acquires the model file and/or algorithm file of the first model from the ADRF according to the model file and/or algorithm storage address;
Step 36, the MTLF performs model training and stores the model file of the updated first model obtained by training in the ADRF;
Step 37, the ADRF sends a model file storage response to the MTLF, including the storage address of the model file;
Step 38, the MTLF sends a model update message to the network repository function to update the model registration information, where the update message carries the updated registration information of the first model (such as the model file storage address, model performance, and model version);
Step 39, the network repository function sends a model update response message to the MTLF;
Step 310, the MTLF sends an update response message for the first model to the model selection function.
In this embodiment, the first network function may perform model training and update the first model based on the request of the model selection function. When the performance of the first model is poor or an update requirement exists, the model selection function requests the first network function to update the model, which effectively addresses the lack of flexibility and generalization capability of traditional service flow classification methods, improves the intelligence of the network function, and lays a solid foundation for the intelligent development of the 6G network.
The first network function may also provide data classification services for other network functions.
The implementation procedure of the first network function for providing the data flow classification service for other network elements is illustrated below.
As shown in fig. 4, taking an example that the first network function is an intelligent user plane function, the fourth network function is a model selection function, the second network function is a network repository function, and the fifth network function is a RAN, the intelligent user plane function provides a data flow classification service for the RAN, including:
Step 41, the RAN sends a service data flow classification service discovery request to an access and mobility management function (Access and Mobility Management Function, AMF), where the request message carries information such as the service requirement ID = "data flow classification", the performance requirement, and the classification attribute. The classification attribute is used to distinguish classification by application, by service, or by importance.
Step 42, the AMF initiates a service data flow classification service discovery request message to the model selection function;
Step 43, the model selection function queries the network repository function for the AI services registered in the network, i.e., sends an AI service query request to the network repository function and indicates that the service type is data flow classification;
Step 44, the network repository function sends an AI service query response to the model selection function, carrying one or more matching network functions, such as intelligent user plane functions, capable of providing the data flow classification service;
Based on the received service data flow classification service discovery request message and the service query response, the model selection function selects one or more network functions best suited to provide the data flow classification service for the RAN, for example, the network function best suited to provide the traffic classification service for the classification attribute specified by the service consumer.
Step 45, the model selection function sends a data flow classification service discovery response to the AMF, wherein the data flow classification service discovery response comprises a service provider, namely an intelligent user plane function;
Optionally, if the model selection function selects multiple network functions, the AMF may select one nearby intelligent user plane function from among them to serve the RAN.
Step 46, the AMF sends a data flow classification service discovery response to the RAN, which carries the address of the intelligent user plane function providing the service;
Step 47, the RAN sends a data stream classification request to the intelligent user plane function, wherein the data stream classification request carries data to be classified;
Step 48, the intelligent user plane function classifies the data using the maintained first model and returns the data flow classification result, including one or a set of classification labels, to the RAN.
In this embodiment, the RAN initiates the traffic classification service discovery request, and the intelligent user plane function provides the traffic classification service for the RAN.
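Under heavy simplification, the discovery-and-classification chain of steps 41 to 48 can be summarized as in the sketch below; the AMF (standing in for the whole discovery path through the model selection function) and the intelligent user plane function are reduced to plain in-process stubs, and every field name is a placeholder rather than a defined interface.

```python
class UpfStub:
    """Stand-in for the intelligent user plane function holding the first model."""
    address = "upf-1.example"

    def classify(self, request: dict) -> dict:
        # Step 48: return one or a set of classification labels for the data flows.
        return {"labels": ["video"] * len(request["data_flows"])}

class AmfStub:
    """Stand-in for the AMF relaying the discovery request via the model selection function."""
    def __init__(self, selected_upf_address: str):
        self._selected = selected_upf_address

    def discover_service(self, ran_request: dict) -> dict:
        # Steps 41-46: forward the discovery request and return the provider's address.
        assert ran_request["service_requirement_id"] == "data flow classification"
        return {"service_address": self._selected}

def ran_classify_flows(ran_request: dict, amf: AmfStub, upf_registry: dict) -> list:
    """Steps 41-48 from the RAN's point of view: discover the provider, then request
    classification of the flows carried in the request."""
    discovery_resp = amf.discover_service(ran_request)                        # steps 41-46
    upf = upf_registry[discovery_resp["service_address"]]
    return upf.classify({"data_flows": ran_request["data_flows"]})["labels"]  # steps 47-48

labels = ran_classify_flows(
    {"service_requirement_id": "data flow classification",
     "classification_attribute": "by_importance",
     "data_flows": [{"flow_id": 7}]},
    AmfStub(selected_upf_address=UpfStub.address),
    {UpfStub.address: UpfStub()},
)
print(labels)
```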
According to the embodiment of the application, the first network function has a built-in pre-trained first model, so that it can independently classify data flows, update the model, and provide data flow classification services externally. The first network function registers the first model it trains and updates with the second network function, which can maintain the various AI models and AI services registered in the network. The first network function can also directly provide a traffic classification service to a nearby RAN using its built-in first model, so that interaction among multiple network elements during data flow classification is avoided, the data flow classification procedure is simplified, network load is reduced, and the requirements of 6G native intelligence and differentiated guarantees for service data flows are met.
As shown in fig. 5, an embodiment of the present application further provides a data flow classification method applied to a fifth network function, where the method includes:
step 501, a fifth network function obtains an address of a first network function;
Step 502, the fifth network function sends the data stream to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
In this embodiment, a pre-trained first model is provided within the first network function, and the first model is used to intelligently classify data streams. The first model may be pre-trained for the first network function. The data stream may be a traffic data stream, i.e., the first model may be used to classify traffic data streams. Optionally, the first network function may be an intelligent user plane function or a network data analytics function. Because the embodiment of the application considers network element intelligence, the first network function has a data flow classification capability in addition to the functions of the UPF; that is, the first network function may be an intelligent user plane function with a data flow classification capability.
The fifth network function may be a network function with a data flow classification service requirement. The fifth network function may also be a RAN. The fifth network function initiates a data flow classification service discovery request, and the first network function provides the traffic classification service for the fifth network function. The traffic classification service requested by the fifth network function may be used to classify non-ultra-low-latency traffic.
In this embodiment, the fifth network function initiates the traffic classification service discovery request, and the first network function provides the traffic classification service for the fifth network function.
Optionally, the acquiring the address of the first network function includes:
Sending a third request message to a fourth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirements and classification attributes;
And receiving a fourth response message sent by the fourth network function, wherein the fourth response message carries an address of the first network function for providing the service.
In this embodiment, the fourth network function may be a model selection function, and the fifth network function may initiate a traffic data flow classification service discovery request (i.e. the third request message) to the model selection function, by which the first network function capable of providing the data flow classification service is determined. Specifically, the fifth network function sends a service data flow classification service discovery request to the model selection function, where the request message carries information such as service requirement id= "data flow classification", performance requirement, classification attribute, and the like. The classification attributes are used to distinguish data classification by application, by service, or by importance.
Optionally, the fifth network function may forward the service data flow classification service discovery request through an AMF, for example, the fifth network function is RAN, the RAN sends the service data flow classification service discovery request to the AMF, a request message carries information such as service requirement id= "data flow classification", performance requirement and classification attribute, and the AMF initiates the service data flow classification service discovery request to the model selection function.
After receiving the service data flow classification service discovery request, the fourth network function queries the first network function that meets the requirement corresponding to the service requirement ID and feeds it back to the fifth network function. The fifth network function then initiates a data flow classification request to the first network function fed back by the fourth network function, thereby realizing data flow classification. The method by which the fourth network function selects the first network function is shown in fig. 4 and is not described again here.
In the embodiment of the application, the fifth network function realizes data flow classification through the first network function; compared with embedding a traffic classification module in the RAN, this avoids additional storage and computation overhead.
As shown in fig. 6, an embodiment of the present application further provides a data flow classification method applied to a fourth network function, where the method includes:
Step 601, the fourth network function receives a third request message sent by the fifth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement and classification attribute;
step 602, the fourth network function queries, according to the third request message, a first network function that meets the requirement corresponding to the service requirement identifier;
step 603, the fourth network function sends a fourth response message to the fifth network function, where the fourth response message carries an address of the first network function for providing services;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
In this embodiment, the fourth network function is, for example, a model selection function. The fifth network function may be a network function that needs to be provided with a data flow classification service, and may also be a RAN. A pre-trained first model is built into the first network function and is used for intelligently classifying the data flow. The first model may be obtained by the first network function through pre-training.
The fifth network function sends a third request message to the fourth network function to request the data flow classification service. The fourth network function queries for a first network function capable of providing the data flow classification service based on the requirement carried by the third request message, and feeds the queried information of the first network function back to the fifth network function, so that the fifth network function performs data flow classification through the first network function.
As an optional embodiment, the querying, according to the third request message, the first network function that satisfies the requirement corresponding to the service requirement identifier includes:
sending a fourth request message to the second network function according to the third request message, wherein the fourth request message carries the requested service type, and the service type indicates data flow classification;
receiving a fifth response message sent by the second network function, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services;
If the fifth response message carries information of a first network function capable of providing the data flow classification service, determining that the network function corresponding to the information carried by the fifth response message is the first network function meeting the requirement corresponding to the service requirement identifier;
and if the fifth response message carries information of a plurality of first network functions capable of providing the data flow classification service, selecting one first network function meeting the requirement corresponding to the service requirement identifier from the plurality of first network functions.
In this embodiment, the second network function may be a network repository function. In addition to the capabilities of the network repository function (NRF), the second network function is responsible for maintaining the AI models and/or AI services registered in the network and providing discovery of the AI models and/or services. When the first network function obtains the first model through training, the first model also needs to be registered with the second network function, and the registration information is provided to the second network function.
After receiving the request message of the fifth network function, the fourth network function initiates a service data flow classification service discovery request message (namely, the fourth request message) to the second network function, indicating that the requested service type is data flow classification; the second network function then feeds back, to the fourth network function, information on the first network function(s) capable of providing the data flow classification service.
If the second network function feeds back one first network function capable of providing the data flow classification service, it can be determined that this first network function meets the requirement corresponding to the service requirement identifier and can provide the data flow classification service for the fifth network function.
If the second network function feeds back a plurality of first network functions capable of providing the data flow classification service, the fourth network function selects one of them. For example, according to the received service data flow classification service discovery request message and the service query response, the fourth network function selects the network function most suitable for providing the data flow classification service to the fifth network function, such as the one best matching the classification attribute specified by the service consumer.
The fourth network function sends a fourth response message to the fifth network function, where the fourth response message carries the address of the selected first network function, and the selected first network function can provide the data flow classification service for the fifth network function.
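As an illustration of one possible way the fourth network function could rank several candidate first network functions, the following sketch filters candidates by the requested classification attribute and performance requirement and prefers the lowest reported latency; the eligibility criterion, the scoring rule, and all field names are assumptions made for this sketch, not requirements of the present application.

```python
# Illustrative ranking of candidate first network functions returned in the
# fifth response message; the eligibility and scoring rules are assumptions.

def select_first_nf(candidates, required_attribute, max_latency_ms):
    """Keep candidates supporting the requested classification attribute and
    meeting the performance requirement, then prefer the lowest reported latency."""
    eligible = [
        c for c in candidates
        if required_attribute in c["supported_attributes"]
        and c["reported_latency_ms"] <= max_latency_ms
    ]
    return min(eligible, key=lambda c: c["reported_latency_ms"]) if eligible else None

candidates = [
    {"address": "nf-a", "supported_attributes": ["by application"], "reported_latency_ms": 8},
    {"address": "nf-b", "supported_attributes": ["by application", "by importance"], "reported_latency_ms": 5},
]
print(select_first_nf(candidates, "by application", max_latency_ms=10))   # selects nf-b
```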
Optionally, the fourth network function may also request a model update. For example, the fourth network function may query the NRF for the models registered in the network, perform matching between the requirement template input by other network elements and the model-related information stored in the second network function and/or the third network function, and determine the AI model type and algorithm according to the requirement. When the performance of the first model is poor or a model update requirement exists, the fourth network function triggers the model update and sends a first request message to the first network function, where the first request message carries at least one of a data requirement, a storage address of the model file of the first model, and required environment information. After receiving the request message, the first network function collects data and updates the model using the collected data.
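The model update trigger described above could, purely as an illustration, be sketched as follows; the accuracy threshold, the message fields, and the callback functions are assumptions made for this sketch rather than anything specified by the present application.

```python
# Illustrative sketch of triggering a model update; the accuracy threshold,
# message fields, and callback functions are assumptions made for this sketch.

ACCURACY_THRESHOLD = 0.9   # assumed performance criterion for the first model

def maybe_trigger_model_update(observed_accuracy, registry_entry, send_to_first_nf):
    """Fourth network function: if the first model under-performs, send a first
    request message asking the first network function to collect data and retrain."""
    if observed_accuracy >= ACCURACY_THRESHOLD:
        return False
    first_request_message = {
        "data_requirement": {"flows": 10_000, "features": ["packet_size", "inter_arrival_time"]},
        "model_file_storage_address": registry_entry["storage_address"],
        "required_environment": {"framework": "pytorch", "gpu": False},
    }
    send_to_first_nf(first_request_message)
    return True

def first_nf_handle_update(first_request_message, collect_flows, retrain):
    """First network function: collect the requested data streams, then update the model."""
    flows = collect_flows(first_request_message["data_requirement"])
    return retrain(flows, first_request_message["model_file_storage_address"])

triggered = maybe_trigger_model_update(
    observed_accuracy=0.82,
    registry_entry={"storage_address": "store://models/first-model-v1"},
    send_to_first_nf=lambda msg: print("first request message:", msg),
)
print("model update triggered:", triggered)
```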
According to the embodiment of the application, the fourth network function can query the first network function capable of providing the data flow classification service based on the request of the fifth network function and feed back the first network function to the fifth network function, so that the fifth network function realizes data flow classification through the first network function.
As shown in fig. 7, an embodiment of the present application further provides a data flow classification method applied to a second network function, where the method includes:
step 701, the second network function receives registration information of a first model sent by the first network function;
step 702, storing a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
In this embodiment, the second network function may be a network repository function. The first network function can classify the data flow based on the first model and also has the capability of maintaining the first model: the first network function can obtain the first model through training, and can further train the existing first model to realize model updating.
In addition to the capabilities of the network repository function (NRF), the second network function is responsible for maintaining the AI models and/or AI services registered in the network and providing discovery of the AI models and/or services. When the first network function obtains the first model through training, the first model needs to be registered with the second network function and the registration information is provided to the second network function; the second network function stores the correspondence between the registration information of the first model and the first network function, so that the specific information of the first model built into each first network function can be determined.
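Purely as an illustration of steps 701 and 702 together with the later discovery query, the following sketch keeps the correspondence between registration information and the registering first network function in an in-memory registry; the data layout, the field names, and the method names are assumptions made for this sketch.

```python
# Illustrative sketch of the second network function's registry behaviour
# (steps 701-702 plus a later discovery query); the data layout is an assumption.

class ModelRegistry:
    """Stores the correspondence between first-model registration information
    and the first network function that registered it."""

    def __init__(self):
        self._entries = {}   # model identifier -> (registration info, first NF address)

    def register(self, registration_info, first_nf_address):
        self._entries[registration_info["model_id"]] = (registration_info, first_nf_address)

    def discover(self, service_type="data flow classification"):
        """Return the first network functions whose registered model matches the
        requested service type (content of a fifth response message)."""
        return [
            address for info, address in self._entries.values()
            if info.get("application_scenario") == service_type
        ]

registry = ModelRegistry()
registry.register(
    {"model_id": "flow-cls-001", "model_type": "LSTM",
     "application_scenario": "data flow classification", "version": "1.0"},
    first_nf_address="nf-classifier-1.example",
)
print(registry.discover())   # ['nf-classifier-1.example']
```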
Optionally, the registration information includes at least one of:
Model identifier;
Model type;
Algorithm;
Storage address;
Application scenario;
Performance indicator;
Deployment requirements;
Computing power requirement;
Version information;
Data format for model training and/or model inference;
Data dimension for model training and/or model inference.
In this embodiment, after the first network function obtains the first model, the first network function may register it with the second network function according to the model registration template information. The first model used by the first network function for intelligently classifying data streams may be implemented with deep learning models such as a residual network (ResNet), a convolutional neural network (CNN), or a long short-term memory network (LSTM).
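As a minimal illustration of such a model, the following sketch (using PyTorch as an assumed framework) classifies a flow represented as a sequence of per-packet feature vectors with an LSTM; the architecture, the feature dimension, and the number of traffic classes are assumptions made for this sketch and are not prescribed by the present application.

```python
# Minimal PyTorch sketch of an LSTM-based first model; the architecture, feature
# dimension, and number of traffic classes are assumptions made for this sketch.
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    def __init__(self, feature_dim=8, hidden_dim=64, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                 # x: (batch, packets, feature_dim)
        _, (h_n, _) = self.lstm(x)        # h_n: (1, batch, hidden_dim)
        return self.head(h_n[-1])         # per-class logits

model = FlowClassifier()
flow = torch.randn(1, 20, 8)              # one flow of 20 packets, 8 features each
print(model(flow).argmax(dim=-1))         # predicted traffic class index
```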
As an alternative embodiment, the method further comprises:
receiving a fourth request message sent by a fourth network function, wherein the fourth request message carries a requested service type, and the service type indicates data flow classification;
and sending a fifth response message to the fourth network function according to the fourth request message, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services.
In this embodiment, the fifth network function requests the data flow classification service from the fourth network function, and the fourth network function queries the second network function for a first network function capable of providing the data flow classification service based on the requirement carried by the request message. Specifically, after receiving the request message of the fifth network function, the fourth network function initiates a service data flow classification service discovery request message (namely, the fourth request message) to the second network function, indicating that the requested service type is data flow classification; the second network function then feeds back, to the fourth network function, information on the first network function(s) capable of providing the data flow classification service.
In the embodiment of the application, besides the capabilities of the original network repository function (NRF), the second network function is responsible for maintaining the various AI models and/or AI services registered in the network and providing discovery of these AI models and/or AI services.
The foregoing embodiments are described with respect to the data stream classification method of the present application, and the following embodiments will further describe the corresponding apparatus with reference to the accompanying drawings.
Specifically, as shown in fig. 8, an embodiment of the present application provides a data flow classification device 800, applied to a first network function, including:
The classification unit 810 is configured to classify the data stream based on a first model, which is a pre-trained artificial intelligence AI model for classifying the data stream.
Optionally, the apparatus further includes:
the model training unit is used for obtaining the first model through model training;
and the third sending unit is used for sending the registration information of the first model to a second network function, and the second network function is used for managing and/or maintaining the registration information.
Optionally, the registration information includes at least one of:
Model identifier;
Model type;
Algorithm;
Storage address;
Application scenario;
Performance indicator;
Deployment requirements;
Computing power requirement;
Version information;
Data format for model training and/or model inference;
Data dimension for model training and/or model inference.
Optionally, the apparatus further includes:
a fourth sending unit, configured to send a model file of the first model to a third network function, where the third network function is configured to store the model file;
And the third receiving unit is used for receiving the first response message sent by the third network function, wherein the first response message carries the storage address of the model file.
Optionally, the model training unit is specifically configured to perform at least one of the following:
According to a preset training period, performing model training by using the received data stream to obtain an updated first model;
And according to the received first request message, performing model training by using the collected data stream to obtain the updated first model.
Optionally, the apparatus further includes:
A fourth receiving unit, configured to receive a first request message sent by a fourth network function, where the first request message is used to request model update;
a data collection unit for collecting a data stream according to the first request message;
The first request message carries at least one of a data requirement, a storage address of a model file of the first model and required environment information.
Optionally, the data collection unit is specifically configured to:
Determining a network function of a data stream for model updating according to the first request message;
Sending a second message to the network function, wherein the second message is used for requesting a data stream for model update;
And receiving a second response message sent by the network function, wherein the second response message carries the data flow.
Optionally, the apparatus further includes:
a fifth receiving unit, configured to receive a second request message sent by a fifth network function, where the second request message is used to request a data stream classification service, and the second request message carries a data stream to be classified;
And a fifth sending unit, configured to send a third response message to the fifth network function, where the third response message carries a result of classifying the data flow.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the first network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
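For illustration, the classification service exchange described above (a second request message carrying the data flow to be classified, answered by a third response message carrying the classification result) could be sketched as follows; the feature extraction, the stand-in model, and all field names are assumptions made for this sketch, not definitions from the present application.

```python
# Illustrative sketch of the classification service exchange: a second request
# message carries the data flow to be classified, and a third response message
# carries the result. Feature extraction, the stand-in model, and all field
# names are assumptions made for this sketch.

def extract_features(raw_flow):
    """Toy feature extraction: mean packet size and mean inter-arrival time."""
    sizes = [pkt["size"] for pkt in raw_flow]
    gaps = [b["t"] - a["t"] for a, b in zip(raw_flow, raw_flow[1:])]
    return {"mean_size": sum(sizes) / len(sizes),
            "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0}

def first_nf_classify(second_request_message, first_model):
    """First network function: classify the carried data flow with the first model."""
    features = extract_features(second_request_message["data_flow"])
    return {"classification_result": first_model(features)}   # third response message

toy_model = lambda f: "video" if f["mean_size"] > 800 else "web"   # stand-in for the first model
request = {"data_flow": [{"size": 1200, "t": 0.00}, {"size": 1100, "t": 0.01}]}
print(first_nf_classify(request, toy_model))   # {'classification_result': 'video'}
```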
Specifically, as shown in fig. 9, an embodiment of the present application provides a data flow classification device 900, which is applied to a fifth network function, and includes:
A first obtaining unit 910, configured to obtain an address of a first network function;
a first sending unit 920, configured to send a data stream to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
Optionally, the first obtaining unit is specifically configured to:
Sending a third request message to a fourth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirements and classification attributes;
And receiving a fourth response message sent by the fourth network function, wherein the fourth response message carries an address of the first network function for providing the service.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the fifth network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
Specifically, as shown in fig. 10, an embodiment of the present application provides a data flow classification device 1000, applied to a fourth network function, including:
A first receiving unit 1010, configured to receive a third request message sent by a fifth network function, where the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement, and classification attribute;
a querying unit 1020, configured to query, according to the third request message, a first network function that satisfies a requirement corresponding to the service requirement identifier;
a second sending unit 1030 configured to send a fourth response message to the fifth network function, where the fourth response message carries an address of the first network function for providing a service;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
Optionally, the query unit is specifically configured to:
sending a fourth request message to the second network function according to the third request message, wherein the fourth request message carries the requested service type, and the service type indicates data flow classification;
receiving a fifth response message sent by the second network function, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services;
If the fifth response message carries information of a first network function capable of providing the data flow classification service, determining that the network function corresponding to the information carried by the fifth response message is the first network function meeting the requirement corresponding to the service requirement identifier;
and if the fifth response message carries information of a plurality of first network functions capable of providing the data flow classification service, selecting one first network function meeting the requirement corresponding to the service requirement identifier from the plurality of first network functions.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the fourth network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
Specifically, as shown in fig. 11, an embodiment of the present application provides a data flow classification device 1100, applied to a second network function, including:
A second receiving unit 1110, configured to receive registration information of the first model sent by the first network function;
A storage unit 1120, configured to store a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
Optionally, the registration information includes at least one of:
Model identifier;
Model type;
Algorithm;
Storage address;
Application scenario;
Performance indicator;
Deployment requirements;
Computing power requirement;
Version information;
Data format for model training and/or model inference;
Data dimension for model training and/or model inference.
Optionally, the apparatus further includes:
a sixth receiving unit, configured to receive a fourth request message sent by a fourth network function, where the fourth request message carries a requested service type, and the service type indicates data flow classification;
A sixth sending unit, configured to send a fifth response message to the fourth network function according to the fourth request message, where the fifth response message carries information of one or more first network functions capable of providing a data flow classification service.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the second network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice. In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-readable storage medium. Based on such an understanding, the technical solution of the present application, in essence or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The storage medium includes a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media capable of storing program code.
As shown in fig. 12, an embodiment of the present application further provides a network device, which is a first network function, and includes a memory 1220, a transceiver 1200, and a processor 1210, where the memory 1220 is used for storing a computer program, the transceiver 1200 is used for receiving and transmitting data under the control of the processor 1210, and the processor 1210 is used for reading the computer program in the memory and performing the following operations:
The data streams are classified based on a first model, which is a pre-trained artificial intelligence AI model for classifying the data streams.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Obtaining the first model through model training;
And sending the registration information of the first model to a second network function, wherein the second network function is used for managing and/or maintaining the registration information.
Optionally, the registration information includes at least one of:
Model identifier;
Model type;
Algorithm;
Storage address;
Application scenario;
Performance indicator;
Deployment requirements;
Computing power requirement;
Version information;
Data format for model training and/or model inference;
Data dimension for model training and/or model inference.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Sending a model file of the first model to a third network function, wherein the third network function is used for storing the model file;
And receiving a first response message sent by the third network function, wherein the first response message carries the storage address of the model file.
Optionally, the processor is configured to read the computer program in the memory and perform at least one of the following operations:
According to a preset training period, performing model training by using the received data stream to obtain an updated first model;
And according to the received first request message, performing model training by using the collected data stream to obtain the updated first model.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Receiving a first request message sent by a fourth network function, wherein the first request message is used for requesting model updating;
collecting a data stream according to the first request message;
The first request message carries at least one of a data requirement, a storage address of a model file of the first model and required environment information.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Determining a network function of a data stream for model updating according to the first request message;
Sending a second message to the network function, wherein the second message is used for requesting a data stream for model update;
And receiving a second response message sent by the network function, wherein the second response message carries the data flow.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
receiving a second request message sent by a fifth network function, wherein the second request message is used for requesting data flow classification service, and the second request message carries data flows to be classified;
And sending a third response message to the fifth network function, wherein the third response message carries a result of classifying the data flow.
Wherein in fig. 12, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 1210 and various circuits of memory represented by memory 1220, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 1200 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1210 is responsible for managing the bus architecture and general processing, and the memory 1220 may store data used by the processor 1210 in performing operations.
Processor 1210 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD), and may also employ a multi-core architecture.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the first network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
As shown in fig. 13, an embodiment of the present application further provides a network device, which is a fifth network function, and includes a memory 1320, a transceiver 1300, and a processor 1310, where the memory 1320 is used to store a computer program, the transceiver 1300 is used to receive and send data under the control of the processor 1310, and the processor 1310 is used to read the computer program in the memory and perform the following operations:
acquiring an address of a first network function;
Transmitting a data stream to be classified to the first network function according to the address;
Wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained artificial intelligence AI model for classifying the data stream.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
Sending a third request message to a fourth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirements and classification attributes;
And receiving a fourth response message sent by the fourth network function, wherein the fourth response message carries an address of the first network function for providing the service.
Wherein in fig. 13, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 1310 and various circuits of memory represented by memory 1320, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 1300 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1310 is responsible for managing the bus architecture and general processing, and the memory 1320 may store data used by the processor 1310 in performing operations.
The processor 1310 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD), and may also employ a multi-core architecture.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the fifth network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
As shown in fig. 14, an embodiment of the present application further provides a network device, which is a fourth network function, and includes a memory 1420, a transceiver 1400, and a processor 1410, wherein the memory 1420 is used for storing a computer program, the transceiver 1400 is used for receiving and transmitting data under the control of the processor 1410, and the processor 1410 is used for reading the computer program in the memory and performing the following operations:
receiving a third request message sent by a fifth network function, wherein the third request message carries a service requirement identifier, and the service requirement identifier indicates at least one of data flow classification, performance requirement and classification attribute;
inquiring a first network function meeting the requirement corresponding to the service requirement identifier according to the third request message;
sending a fourth response message to the fifth network function, wherein the fourth response message carries an address of the first network function for providing the service;
wherein the first network function is configured to classify the data stream based on a first model, the first model being a pre-trained AI model for classifying the data stream.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
sending a fourth request message to the second network function according to the third request message, wherein the fourth request message carries the requested service type, and the service type indicates data flow classification;
receiving a fifth response message sent by the second network function, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services;
If the fifth response message carries information of a first network function capable of providing the data flow classification service, determining that the network function corresponding to the information carried by the fifth response message is the first network function meeting the requirement corresponding to the service requirement identifier;
and if the fifth response message carries information of a plurality of first network functions capable of providing the data flow classification service, selecting one first network function meeting the requirement corresponding to the service requirement identifier from the plurality of first network functions.
Wherein in fig. 14, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by the processor 1410 and various circuits of the memory represented by the memory 1420, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 1400 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1410 is responsible for managing the bus architecture and general processing, and the memory 1420 may store data used by the processor 1410 in performing operations.
The processor 1410 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD), and may also employ a multi-core architecture.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the fourth network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
As shown in fig. 15, an embodiment of the present application further provides a network device, which is a second network function, including a memory 1520, a transceiver 1500, and a processor 1510, wherein the memory 1520 is used for storing a computer program, the transceiver 1500 is used for receiving and transmitting data under the control of the processor 1510, and the processor 1510 is used for reading the computer program in the memory and performing the following operations:
receiving registration information of a first model sent by a first network function;
storing a correspondence between registration information of the first model and the first network function;
Wherein the first model is a pre-trained AI model for classifying a data stream.
Optionally, the registration information includes at least one of:
Model identifier;
Model type;
Algorithm;
Storage address;
Application scenario;
Performance indicator;
Deployment requirements;
Computing power requirement;
Version information;
Data format for model training and/or model inference;
Data dimension for model training and/or model inference.
Optionally, the processor is configured to read the computer program in the memory and perform the following operations:
receiving a fourth request message sent by a fourth network function, wherein the fourth request message carries a requested service type, and the service type indicates data flow classification;
and sending a fifth response message to the fourth network function according to the fourth request message, wherein the fifth response message carries information of one or more first network functions capable of providing data flow classification services.
Wherein in fig. 15, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 1510 and various circuits of memory represented by memory 1520, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. Transceiver 1500 may be a number of elements, including a transmitter and a receiver, providing a means for communicating with various other apparatus over a transmission medium. The processor 1510 is responsible for managing the bus architecture and general processing, and the memory 1520 may store data used by the processor 1510 in performing operations.
The processor 1510 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or a complex programmable logic device (CPLD), or may employ a multi-core architecture.
It should be noted that, the above device provided in this embodiment of the present application can implement all the method steps implemented in the method embodiment applied to the second network function, and can achieve the same technical effects, and detailed descriptions of the same parts and beneficial effects as those in the method embodiment in this embodiment are omitted.
In addition, the embodiment of the present application further provides a processor-readable storage medium, on which a computer program is stored, where the program, when executed by a processor, implements the steps of the data stream classification method described above and achieves the same technical effects; to avoid repetition, a detailed description is omitted herein. The readable storage medium may be any available medium or data storage device that can be accessed by a processor, including, but not limited to, magnetic storage (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical storage (e.g., CD, DVD, BD, HVD, etc.), and semiconductor storage (e.g., ROM, EPROM, EEPROM, non-volatile memory (NAND flash), solid state disk (SSD)), etc.
In addition, the embodiment of the present application further provides a computer program product, which includes computer instructions, and the computer instructions when executed by a processor implement the steps of the data stream classification method described above, and achieve the same technical effects, so that repetition is avoided, and no further description is provided herein.
It should be noted that the technical solution provided by the embodiments of the present application may be applicable to various systems, especially a 5G system. For example, applicable systems may be the global system for mobile communications (GSM), code division multiple access (CDMA), wideband code division multiple access (WCDMA), general packet radio service (GPRS), long term evolution (LTE), LTE frequency division duplex (FDD), LTE time division duplex (TDD), long term evolution-advanced (LTE-A), universal mobile telecommunication system (UMTS), worldwide interoperability for microwave access (WiMAX), 5G New Radio (NR) systems, and the like. Terminal devices and network devices are included in these various systems. Core network parts, such as the evolved packet system (EPS) and the 5G system (5GS), may also be included in the system.
The terminal device according to the embodiments of the present application may be a device that provides voice and/or data connectivity to a user, a handheld device with a wireless connection function, or another processing device connected to a wireless modem. The name of the terminal device may differ in different systems; for example, in a 5G system, the terminal device may be referred to as user equipment (UE). A wireless terminal device may communicate with one or more core networks (CNs) via a radio access network (RAN), and may be a mobile terminal device, such as a mobile phone (or "cellular" phone) or a computer with a mobile terminal device, for example, a portable, pocket-sized, hand-held, computer-built-in, or vehicle-mounted mobile device that exchanges voice and/or data with the radio access network, such as a personal communication service (PCS) phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, or a personal digital assistant (PDA). The wireless terminal device may also be referred to as a system, a subscriber unit, a subscriber station, a mobile station, a remote station, an access point, a remote terminal device (remote terminal), an access terminal device (access terminal), a user terminal device (user terminal), a user agent (user agent), or a user device (user device), which is not limited in the embodiments of the present application.
The network device according to the embodiments of the present application may be a base station, and the base station may include a plurality of cells that provide services for terminals. Depending on the specific application, a base station may also be called an access point, or may be a device in the access network that communicates over the air interface, through one or more sectors, with wireless terminal devices, or may have another name. The network device may be configured to exchange received air frames with Internet protocol (IP) packets, acting as a router between the wireless terminal device and the rest of the access network, where the rest of the access network may include an IP communication network. The network device may also coordinate attribute management for the air interface. For example, the network device according to the embodiments of the present application may be a base transceiver station (BTS) in the global system for mobile communications (GSM) or code division multiple access (CDMA), a NodeB in wideband code division multiple access (WCDMA), an evolved NodeB (eNB or e-NodeB) in a long term evolution (LTE) system, a 5G base station (gNB) in the 5G network architecture (next generation system), a home evolved NodeB (HeNB), a relay node, a femto base station, a pico base station, or the like, which is not limited in the embodiments of the present application. In some network structures, the network device may include a centralized unit (CU) node and a distributed unit (DU) node, and the CU node and the DU node may also be geographically separated.
Multiple-input multiple-output (MIMO) transmission may be performed between the network device and the terminal device, each using one or more antennas, and the MIMO transmission may be single-user MIMO (SU-MIMO) or multi-user MIMO (MU-MIMO). Depending on the form and number of antenna combinations, the MIMO transmission may be 2D-MIMO, 3D-MIMO, FD-MIMO, or massive-MIMO, or may be diversity transmission, precoded transmission, beamforming transmission, or the like.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, magnetic disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flowchart and/or block of the flowchart illustrations and/or block diagrams, and combinations of flowcharts and/or block diagrams, can be implemented by computer-executable instructions. These computer-executable instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be stored in a processor-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the processor-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These processor-executable instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.
Claims (33)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202410522182.2A CN120856652A (en) | 2024-04-28 | 2024-04-28 | Data stream classification method, device and equipment |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| CN120856652A true CN120856652A (en) | 2025-10-28 |
Family
ID=97424515
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN120856652A (en) |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |