
WO2025232813A1 - Communication method and communication apparatus - Google Patents

Communication method and communication apparatus

Info

Publication number
WO2025232813A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
model
resource configuration
channel
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2025/093323
Other languages
French (fr)
Chinese (zh)
Inventor
田洋
陈家璇
柴晓萌
孙琰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of WO2025232813A1
Legal status: Pending


Definitions

  • This application relates to the field of communication technology, and more specifically, to a communication method and a communication device.
  • network devices determine, based on downlink channel state information (CSI), the downlink data channel resources, modulation and coding scheme (MCS), precoding, and other downlink channel configuration information used to schedule terminal devices.
  • Terminal devices calculate the downlink CSI by measuring the downlink reference signal and generate a CSI report, which is then fed back to the network devices.
  • the configuration of reference signals sent by network devices to terminal devices is flexible and variable, which may cause performance fluctuations in AI models during channel information processing. For example, if the AI model on the terminal device side is an AI CSI compression model, it may lead to performance fluctuations in CSI feedback. Similarly, if the AI model on the terminal device side is an AI CSI prediction model, it may lead to performance fluctuations in CSI prediction.
  • This application provides a communication method that can guarantee or improve the performance of an AI model when a network device applies different resource configurations to a terminal device.
  • a first aspect provides a communication method, which can be executed by a first device.
  • the first device can be a device on the first AI model side, or a chip or circuit of the device on the first AI model side.
  • the device on the first AI model side can be replaced by a device on the terminal device side, which can include at least one of a terminal device or an AI entity on the terminal device side.
  • the AI entity on the terminal device side can be the terminal device itself, or an AI entity serving the terminal device, such as a server, for example an over-the-top (OTT) server or a cloud server.
  • the method includes: receiving first indication information, the first indication information indicating a first resource configuration, the first resource configuration being one of at least one resource configuration supported by a first artificial intelligence (AI) model, wherein the first AI model is used to process channel information measured based on the first resource configuration.
  • the resource configuration supported by the first AI model can be understood as a resource configuration that does not exceed the capabilities of the first AI model; for example, the resource configuration could be a CSI-RS configuration. It should be noted that, in this embodiment, the terminal device can determine the resource configuration supported by the AI model by consulting the AI model's technical documentation or related instructions.
  • the terminal device receives first indication information sent by the network device, the first indication information indicating a first resource configuration.
  • the first resource configuration is a resource configuration supported by a first AI model on the terminal device side, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device.
  • the resource configuration includes: a configuration type, the offset between adjacent resources, or the number of transmissions.
  • the configuration type includes at least one of the following: periodic configuration, semi-static configuration, or aperiodic configuration.
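  • As a purely illustrative sketch (the class, field, and enum names below are assumptions, not taken from the application), the resource-configuration fields described above could be modeled as follows:

```python
from dataclasses import dataclass
from enum import Enum

class ConfigurationType(Enum):
    PERIODIC = "periodic"
    SEMI_STATIC = "semi-static"
    APERIODIC = "aperiodic"

@dataclass
class ResourceConfiguration:
    configuration_type: ConfigurationType
    adjacent_resource_offset: int   # offset between adjacent resources, e.g. in slots
    num_transmissions: int          # number of transmissions

# Example: a periodic configuration with an offset of 4 and 8 transmissions (assumed values).
first_resource_configuration = ResourceConfiguration(ConfigurationType.PERIODIC, 4, 8)
```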
  • the first AI model is used to process channel information measured based on the first resource configuration, and the method further includes: determining a first channel report, where the first channel report is determined based on processing, by the first AI model, of the channel information measured based on the first resource configuration.
  • the terminal device uses the first AI model to process the channel information measured based on the first resource configuration and generates a first channel report, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device.
  • the first channel report can be replaced by the first CSI report, or the first CSI feedback information, or the first CSI compressed information, etc. This application does not impose any limitations on this.
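  • As a hedged, terminal-side sketch of the flow described above (the helper names measure_channel and first_ai_model are hypothetical placeholders, not from the application), receiving the first indication information, measuring, and producing the first channel report might look like this:

```python
def build_first_channel_report(first_indication_information, first_ai_model, measure_channel):
    """Illustrative only: 1) read the first resource configuration from the first
    indication information, 2) measure channel information based on it (e.g., a CSI-RS
    measurement), 3) process the measurement with the first AI model to obtain the
    first channel report."""
    first_resource_configuration = first_indication_information["first_resource_configuration"]
    channel_information = measure_channel(first_resource_configuration)
    first_channel_report = first_ai_model(channel_information)  # e.g., AI CSI compression
    return first_channel_report
```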
  • the method further includes: sending first information, which is associated with the at least one resource configuration.
  • the terminal device sends first information to the network device, the first information being associated with at least one resource configuration supported by the first AI model.
  • the network device selects the first resource configuration from the at least one resource configuration and sends it to the terminal device, which can guarantee or improve the performance of the AI model when the network device provides different resource configurations to the terminal device.
  • the first information includes the at least one resource configuration.
  • the terminal device reports at least one resource configuration supported by the first AI model to the network device, and the network device selects the first resource configuration from the at least one resource configuration and sends it to the terminal device, which can guarantee or improve the performance of the AI model when the network device provides different resource configurations to the terminal device.
  • the first information may also include at least one resource configuration supported by the second AI model, wherein the second AI model can be understood as one or more AI models different from the first AI model.
  • the first information includes the identification information of the first AI model.
  • the terminal device sends the identification information of the first AI model to the network device.
  • the network device determines at least one resource configuration supported by the first AI model based on the identification information of the first AI model, and selects a first resource configuration from the at least one resource configuration and sends it to the terminal device. This can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device.
  • the method further includes: sending second indication information, which indicates the correspondence between the identification information of the first AI model and the at least one resource configuration.
  • the network device determines at least one resource configuration supported by the first AI model according to the identification information of the first AI model and the second indication information, and selects a first resource configuration from the at least one resource configuration and sends it to the terminal device, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device.
  • the second indication information further indicates the correspondence between the identification information of the second AI model and at least one resource configuration supported by the second AI model, wherein the second AI model is different from the first AI model.
  • the network device determines at least one resource configuration supported by the first AI model according to the identification information of the first AI model and the second indication information, and selects a first resource configuration from the at least one resource configuration and sends it to the terminal device, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device.
  • the second AI model can be one or more AI models that are different from the first AI model.
  • sending the second indication information includes: sending second information, which includes the second indication information, and the second information includes any one of the following: model-related information, registration request information, or terminal device capability information. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device applies different resource configurations to the terminal device.
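  • As an illustrative sketch only (the model identifiers, the configuration labels, and the selection policy are assumptions), the correspondence carried by the second indication information and the network-side selection of the first resource configuration could look like this:

```python
# The second indication information as a correspondence between AI model identifiers
# and the resource configurations each model supports (identifiers are hypothetical).
second_indication_information = {
    "model_1": ["periodic", "aperiodic"],   # configurations supported by the first AI model
    "model_2": ["semi-static"],             # configurations supported by a second AI model
}

def select_first_resource_configuration(model_id, correspondence):
    """Network-side sketch: look up the configurations supported by the reported model
    and pick one of them as the first resource configuration."""
    supported = correspondence[model_id]
    return supported[0]   # any selection policy could be substituted here

first_resource_configuration = select_first_resource_configuration(
    "model_1", second_indication_information)   # -> "periodic"
```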
  • the first information includes functional information of the first AI model, which includes any one of the following:
  • function information for compression following time-domain channel state information (CSI) prediction
  • the performance of AI models can be guaranteed or improved when network devices allocate different resources to terminal devices.
  • some implementations of the first aspect include: sending a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained from measurements based on the second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device applies different resource configurations to the terminal device.
  • the third indication information may be the identification information of the third AI model.
  • the third AI model can be the first AI model described above
  • the second resource configuration can be the first resource configuration described above
  • the second channel report can be the first channel report described above.
  • the terminal device sends the second channel report and the third indication information to the network device, so that the network device can use the AI model corresponding to the third AI model to decompress the second channel report to obtain the recovered channel information.
  • the method further includes: receiving fourth indication information, the fourth indication information indicating the configuration information of the second channel report, the configuration information of the second channel report including the maximum feedback overhead of the second channel report.
  • the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model (e.g., a third AI model), which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device, and further guarantees the performance of CSI feedback.
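  • The following sketch is a hedged illustration of this selection (the candidate models, their payload sizes, and the "largest payload that fits" policy are assumptions, not taken from the application):

```python
# Candidate AI models and the feedback payload (in bits) each would produce; values are assumed.
candidate_models = {"model_a": 128, "model_b": 64, "model_c": 32}

def select_third_ai_model(max_feedback_overhead_bits, candidates):
    """Pick the model with the largest payload that still fits the maximum feedback
    overhead indicated by the fourth indication information (assumed selection policy)."""
    feasible = {m: b for m, b in candidates.items() if b <= max_feedback_overhead_bits}
    if not feasible:
        raise ValueError("no candidate AI model fits the configured feedback overhead")
    return max(feasible, key=feasible.get)

third_ai_model_id = select_third_ai_model(100, candidate_models)   # -> "model_b"
```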
  • the method further includes: determining the information of the second channel report based on the second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device applies different resource configurations to the terminal device, and the performance of CSI feedback can be further guaranteed.
  • the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to one or more measurement resources, the time-domain information corresponding to one or more measurement resources, the frequency-domain information corresponding to one or more measurement resources, the spatial-domain information corresponding to one or more measurement resources, or the number of bits corresponding to each of the one or more measurement resources.
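  • As a hypothetical sketch of determining the information of the second channel report from the second resource configuration (all field names and the fixed per-resource bit width are assumptions):

```python
def build_second_report_info(second_resource_configuration):
    """Hypothetical mapping from the second resource configuration to the fields of the
    second channel report listed above (field names and bit widths are assumptions)."""
    num_resources = len(second_resource_configuration["measurement_resources"])
    return {
        "num_resources": num_resources,
        "measurement_results": [],   # filled in after the measurements are performed
        "time_domain_info": second_resource_configuration.get("time_domain_info"),
        "frequency_domain_info": second_resource_configuration.get("frequency_domain_info"),
        "spatial_domain_info": second_resource_configuration.get("spatial_domain_info"),
        "bits_per_resource": [16] * num_resources,   # assumed per-resource bit width
    }
```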
  • a second aspect provides a communication method, which can be executed by a first device.
  • the first device can be a device on the first AI model side, or a chip or circuit of the device on the first AI model side.
  • the device on the first AI model side can be replaced by a device on the terminal device side.
  • the terminal device side can include at least one of a terminal device or an AI entity on the terminal device side.
  • the AI entity on the terminal device side can be the terminal device itself, or an AI entity serving the terminal device, such as a server, for example an over-the-top (OTT) server or a cloud server.
  • the method includes: sending a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained from measurements based on the second resource configuration.
  • the terminal device matches, in real time, the configuration information of the channel report sent by the network device, selects the most suitable AI model (e.g., the third AI model), and sends third indication information to the network device so that the network device can use the AI model corresponding to the third AI model to decompress the second channel report and obtain the recovered channel information.
  • the method further includes: receiving fourth indication information, which indicates the configuration information of the second channel report, the configuration information of the second channel report including the maximum feedback overhead of the second channel report.
  • the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model (e.g., a third AI model), which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device, and further guarantees the performance of CSI feedback.
  • the method further includes: determining the information of the second channel report based on the second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device applies different resource configurations to the terminal device, and the performance of CSI feedback can be further guaranteed.
  • the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to one or more measurement resources, the time-domain information corresponding to one or more measurement resources, the frequency-domain information corresponding to one or more measurement resources, the spatial-domain information corresponding to one or more measurement resources, or the number of bits corresponding to each of the one or more measurement resources.
  • a third aspect provides a communication method, which can be executed by a second device.
  • the second device may include at least one of a network device or an AI entity on the network device side.
  • the AI entity on the network device side can be the network device itself or an AI entity serving the network device (it can also be replaced by an intelligent network element, etc., without limitation), such as a radio access network (RAN) intelligent controller (RIC), operation administration and maintenance (OAM), or a server, such as a cloud server.
  • an intelligent network element can also be understood as a network device with AI functionality, which can be applied in O-RAN architecture or non-O-RAN architecture, without limitation.
  • the method includes: determining a first resource configuration, the first resource configuration being one of at least one resource configuration supported by a first AI model, the first AI model being used to process channel information measured based on the first resource configuration; and sending first indication information, the first indication information indicating the first resource configuration.
  • the resource configuration includes: a configuration type, the offset between adjacent resources, or the number of transmissions, and the configuration type includes at least one of the following: periodic configuration, semi-static configuration, or aperiodic configuration.
  • the method further includes: obtaining first information associated with the at least one resource configuration.
  • the first information includes the at least one resource configuration.
  • the first information includes the identification information of the first AI model.
  • the method further includes: obtaining second indication information, which indicates the correspondence between the identification information of the first AI model and the at least one resource configuration.
  • the second indication information also indicates the correspondence between the identification information of the second AI model and at least one resource configuration supported by the second AI model, the second AI model being different from the first AI model.
  • obtaining the second indication information includes: obtaining second information, which includes the second indication information, and the second information includes any one of the following: model-related information, registration request information, or terminal device capability information.
  • the first information includes functional information of the first AI model, which includes any one of the following:
  • some implementations of the third aspect include: acquiring a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, the channel information obtained from measurements based on the second resource configuration.
  • the method further includes: sending fourth indication information, the fourth indication information indicating configuration information of the second channel report, the configuration information of the second channel report including the maximum feedback overhead of the second channel report.
  • the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to one or more measurement resources, the time-domain information corresponding to one or more measurement resources, the frequency-domain information corresponding to one or more measurement resources, the spatial-domain information corresponding to one or more measurement resources, or the number of bits corresponding to each of the one or more measurement resources; this is not limited.
  • a fourth aspect provides a communication method, which can be executed by a second device, the second device including at least one of a network device or an AI entity on the network device side.
  • the AI entity on the network device side can be the network device itself or an AI entity serving the network device, such as a radio access network (RAN), a RAN intelligent controller (RIC), or an operation administration and maintenance (OAM) system.
  • the method includes: acquiring a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained from measurements based on the second resource configuration.
  • the method further includes: sending fourth indication information, which indicates configuration information of the second channel report, the configuration information of the second channel report including the maximum feedback overhead of the second channel report.
  • the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to one or more measurement resources, the time-domain information corresponding to one or more measurement resources, the frequency-domain information corresponding to one or more measurement resources, the spatial-domain information corresponding to one or more measurement resources, or the number of bits corresponding to each of the one or more measurement resources; this is not limited.
  • a fifth aspect provides a communication device, which may be a first device, or a device or module for performing the functions of the first device.
  • the communication device may include modules or units corresponding to the methods/operations/steps/actions described in the first aspect, which may be hardware circuits, software, or a combination of hardware circuits and software.
  • the first device mentioned above can be a terminal device or an AI entity on the terminal device side, and there is no limitation thereto.
  • a sixth aspect provides a communication device, which may be a second device, or a device or module for performing the functions of the second device.
  • the communication device may include modules or units corresponding to the methods/operations/steps/actions described in the second aspect, which may be hardware circuits, software, or a combination of hardware circuits and software.
  • the second device mentioned above can be a network device or an AI entity on the network device side.
  • a seventh aspect provides a communication device including a processor configured to, by executing a computer program or instructions, or by logic circuitry, cause the communication device to perform the method described in the first aspect and any possible manner of the first aspect; or cause the communication device to perform the method described in the second aspect and any possible manner of the second aspect; or cause the communication device to perform the method described in the third aspect and any possible manner of the third aspect; or cause the communication device to perform the method described in the fourth aspect and any possible manner of the fourth aspect.
  • the communication device also includes a memory for storing the computer program or instructions.
  • the communication device also includes a communication interface for inputting and/or outputting signals.
  • an eighth aspect provides a communication device including logic circuitry and an input/output interface, the input/output interface being used for inputting and/or outputting signals.
  • the logic circuitry is configured to perform the method described in the first aspect and any possible mode of the first aspect; or the logic circuitry is configured to perform the method described in the second aspect and any possible mode of the second aspect; or the logic circuitry is configured to perform the method described in the third aspect and any possible mode of the third aspect; or the logic circuitry is configured to perform the method described in the fourth aspect and any possible mode of the fourth aspect.
  • a ninth aspect provides a computer-readable storage medium storing a computer program or instructions that, when executed on a computer, cause the method described in the first aspect and any possible manner of the first aspect to be performed; or cause the method described in the second aspect and any possible manner of the second aspect to be performed; or cause the method described in the third aspect and any possible manner of the third aspect to be performed; or cause the method described in the fourth aspect and any possible manner of the fourth aspect to be performed.
  • a tenth aspect provides a computer program product comprising instructions that, when executed on a computer, cause the method described in the first aspect and any possible mode of the first aspect to be executed; or cause the method described in the second aspect and any possible mode of the second aspect to be executed; or cause the method described in the third aspect and any possible mode of the third aspect to be executed; or cause the method described in the fourth aspect and any possible mode of the fourth aspect to be executed.
  • an eleventh aspect provides a chip or chip system comprising: one or more processors configured to execute computer programs or instructions in a memory, such that the chip or chip system implements the methods of the first aspect and any possible implementation thereof; or, such that the chip or chip system implements the methods of the second aspect and any possible implementation thereof; or, such that the chip or chip system implements the methods of the third aspect and any possible implementation thereof; or, such that the chip or chip system implements the methods of the fourth aspect and any possible implementation thereof.
  • Figure 1 is a schematic diagram of an application framework applicable to an embodiment of this application.
  • FIG. 2 is a schematic diagram of another application framework applicable to the embodiments of this application.
  • Figure 3 is a schematic diagram of a communication system applicable to an embodiment of this application.
  • Figure 4 is a schematic diagram of another communication system applicable to the embodiments of this application.
  • Figure 5 is a schematic diagram of the relationship between the encoder and the decoder.
  • Figure 6 is a schematic flowchart of a communication method 600 provided in an embodiment of this application.
  • Figure 7 is a schematic flowchart of a communication method 700 provided in an embodiment of this application.
  • Figure 8 is a schematic flowchart of a communication method 800 provided in an embodiment of this application.
  • Figure 9 is a schematic flowchart of a communication method 900 provided in an embodiment of this application.
  • Figure 10 is a schematic flowchart of a communication method 1000 provided in an embodiment of this application.
  • Figure 11 is a schematic flowchart of a communication method 1100 provided in an embodiment of this application.
  • Figure 12 is a schematic flowchart of another communication method 1200 according to an embodiment of this application.
  • Figure 13 is a schematic flowchart of another communication method 1300 according to an embodiment of this application.
  • Figure 14 is a schematic flowchart of another communication method 1400 according to an embodiment of this application.
  • Figure 15 is a schematic block diagram of a communication device according to an embodiment of this application.
  • Figure 16 is another schematic block diagram of a communication device according to an embodiment of this application.
  • any embodiment or design described in this application as “exemplary” or “for example” should not be construed as being more preferred or advantageous than other embodiments or designs.
  • the use of terms such as “exemplary” or “for example” is intended to present the relevant concepts in a concrete manner for ease of understanding.
  • in this application, the information indicated by indication information is called the information to be indicated.
  • there are many ways to indicate the information to be indicated, such as, but not limited to, directly indicating the information to be indicated, for example the information to be indicated itself or its index. The information to be indicated can also be indicated indirectly by indicating other information, where there is an association between the other information and the information to be indicated. It is also possible to indicate only a part of the information to be indicated, while the other parts are known or pre-agreed.
  • in addition, specific information can be indicated by means of a pre-agreed (e.g., protocol-defined) arrangement order of various pieces of information, thereby reducing the indication overhead to some extent. Common parts of various pieces of information can also be identified and indicated uniformly, to reduce the indication overhead caused by indicating the same information separately.
  • pre-configuration may include predefinition, for example, protocol definition. "Predefinition" can be implemented by pre-storing corresponding codes, tables, or other means of indicating relevant information in devices (e.g., including various network elements). This application does not limit the specific implementation.
  • storage or “preservation” in this application can refer to storage in one or more memory devices. These memory devices can be separately configured or integrated into an encoder or decoder, processor, or communication device. Alternatively, some memory devices can be separately configured, while others can be integrated into the decoder, processor, or communication device.
  • the type of memory can be any form of storage medium, and this is not limited.
  • the “protocol” involved in this application may refer to standard protocols in the field of communications, such as fourth-generation (4G) network protocols, fifth-generation (5G) network protocols, new radio (NR) protocols, 5.5G network protocols, future network protocols, and related protocols applied in future communication systems. This application does not limit the scope of the term.
  • 4G fourth-generation
  • 5G fifth-generation
  • NR new radio
  • A/B can mean A or B.
  • "and/or" merely describes an association relationship between related objects, indicating that three relationships can exist.
  • A and/or B can mean: A exists alone, both A and B exist, or B exists alone.
  • A and B can each be singular or plural.
  • the indication includes direct indication (also known as explicit indication) and implicit indication.
  • Direct indication of information A means including information A.
  • Implicit indication of information A means indicating information A through the correspondence between information A and information B and by directly indicating information B. The correspondence between information A and information B can be predefined, pre-stored, pre-burned, or pre-configured.
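  • As a minimal sketch of implicit indication (the table values are hypothetical), information A is recovered from a predefined correspondence after only information B is transmitted:

```python
# Hypothetical sketch of implicit indication: information A is never sent directly;
# the sender transmits information B, and the receiver recovers A through a
# predefined (e.g., protocol-defined) correspondence between A and B.
PREDEFINED_CORRESPONDENCE = {   # known to both sides in advance
    "B0": "A0",
    "B1": "A1",
}

def sender_indicate(information_a):
    """Pick the value of information B that maps to the intended information A."""
    inverse = {a: b for b, a in PREDEFINED_CORRESPONDENCE.items()}
    return inverse[information_a]   # only B is transmitted

def receiver_interpret(information_b):
    """Recover information A from the received information B."""
    return PREDEFINED_CORRESPONDENCE[information_b]

assert receiver_interpret(sender_indicate("A1")) == "A1"
```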
  • information C is used to determine information D, including both when information D is determined solely based on information C and when it is determined based on information C and other information. Furthermore, information C can also be used to determine information D indirectly, for example, when information D is determined based on information E, and information E is determined based on information C.
  • "device A sends information A to device B" can be understood as device B being the destination of information A, or an intermediate network element on the transmission path to the destination; this may include sending information A to device B directly or indirectly.
  • "device B receives information A from device A" can be understood as device A being the source of information A, or an intermediate network element on the transmission path between device B and the source; this may include receiving information A from device A directly or indirectly. Information may undergo necessary processing, such as format changes, between the source and destination, but the destination can still understand the valid information from the source. Similar expressions in this application can be interpreted similarly and will not be elaborated further here.
  • the technical solutions provided in this application can be applied to various communication systems, such as 5G or NR systems, Long Term Evolution (LTE) systems, LTE Frequency Division Duplex (FDD) systems, LTE Time Division Duplex (TDD) systems, Wireless Local Area Network (WLAN) systems, satellite communication systems, future communication systems such as future mobile communication systems, or integrated systems of multiple systems.
  • the technical solutions provided in this application can also be applied to device-to-device (D2D) communication, vehicle-to-everything (V2X) communication, machine-to-machine (M2M) communication, machine-type communication (MTC), and Internet of Things (IoT) communication systems.
  • a device can send signals to or receive signals from another device. Signals may include information, signaling, or data. A device can also be replaced by an entity, network entity, equipment, communication device, communication module, node, communication node, etc.
  • This application describes a device as an example.
  • a communication system may include at least one terminal device and at least one network device. The network device can send downlink signals to the terminal device, and/or the terminal device can send uplink signals to the network device, and/or, the terminal device can send sidelink signals to another terminal device, and/or, the network device can send signals to another network device.
  • the terminal device may also be referred to as user equipment (UE), access terminal, user unit, user station, mobile station, remote station, remote terminal, mobile device, user terminal, terminal, wireless communication device, user agent, or user device.
  • Terminal devices can be devices that provide voice/data connectivity to users, such as handheld devices or in-vehicle devices with a wireless connection function.
  • terminals include: mobile phones, tablets, laptops, PDAs, mobile internet devices (MIDs), wearable devices, virtual reality (VR) devices, augmented reality (AR) devices, wireless terminals in industrial control, wireless terminals in self-driving, wireless terminals in remote medical surgery, wireless terminals in smart grids, and wireless terminals in transportation safety.
  • wireless terminals in smart cities, wireless terminals in smart homes, cellular phones, cordless phones, session initiation protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants (PDAs), handheld devices with wireless communication capabilities, computing devices or other processing devices connected to a wireless modem, wearable devices, terminal devices in 5G networks, or terminal devices in future evolved public land mobile networks (PLMNs); the embodiments of this application do not limit this.
  • the terminal device can also be a wearable device.
  • Wearable devices also known as wearable smart devices, are a general term for devices that utilize wearable technology to intelligently design and develop everyday wearables, such as glasses, gloves, watches, clothing, and shoes.
  • Wearable devices are portable devices that are worn directly on the body or integrated into the user's clothing or accessories.
  • Wearable devices are not merely hardware devices; they achieve powerful functions through software support, data interaction, and cloud interaction.
  • wearable smart devices include those that are feature-rich, large in size, and can achieve complete or partial functionality without relying on a smartphone, such as smartwatches or smart glasses, as well as those that focus on a specific application function and require the use of other devices such as smartphones, such as various smart bracelets and smart jewelry for vital sign monitoring.
  • the device for implementing the functions of the terminal device can be the terminal device itself, or it can be any device capable of supporting the terminal device in implementing those functions, such as a chip system.
  • This device can be installed in or used in conjunction with the terminal device.
  • the chip system can be composed of chips or may include chips and other discrete components.
  • This embodiment only uses the terminal device as an example to illustrate the device for implementing the functions of the terminal device, and does not constitute a limitation on the solution of this embodiment.
  • the network device in this application embodiment may include a device for communicating with a terminal device.
  • This network device may include an access network device or a radio access network device, such as a base station.
  • the network device in this application embodiment may also include a radio access network (RAN) node (or device) for connecting the terminal device to a wireless network.
  • a base station can broadly encompass, or be replaced by, various names, including: NodeB, evolved NodeB (eNB), next-generation NodeB (gNB), relay station, access point (AP), transmitting and receiving point (TRP), transmitting point (TP), master station, secondary station, multi-standard radio (MSR) node, home base station, network controller, access node, wireless node, transmission node, transceiver node, baseband unit (BBU), remote radio unit (RRU), active antenna unit (AAU), remote radio head (RRH), central unit (CU), distributed unit (DU), radio unit (RU), positioning node, and RAN intelligent controller (RIC).
  • a base station can also be a macro base station, micro base station, relay node, donor node, or similar entity, or a combination thereof.
  • a base station can also refer to a communication module, modem, or chip installed within the aforementioned equipment or apparatus.
  • a base station can also be a mobile switching center and equipment performing base station functions in D2D, V2X, and M2M communications, network-side equipment in future networks, or equipment performing base station functions in future communication systems.
  • a base station can support networks using the same or different access technologies.
  • RAN nodes can also be servers, wearable devices, vehicles, or in-vehicle equipment.
  • the access network equipment in vehicle-to-everything (V2X) technology can be a roadside unit (RSU).
  • network devices can be devices that include CUs or DUs, or devices that include both CUs and DUs, or devices with control plane CU nodes (central unit-control plane (CU-CP)) and user plane CU nodes (central unit-user plane (CU-UP)) and DU nodes.
  • network devices include gNB-CU-CP, gNB-CU-UP, and gNB-DU.
  • RAN nodes collaborate to assist terminals in achieving wireless access, with different RAN nodes each implementing some of the base station's functions.
  • RAN nodes can be CUs, DUs, CU-CPs, CU-UPs, or RUs.
  • CUs and DUs can be configured separately or included in the same network element, such as a BBU.
  • RUs can be included in radio frequency equipment or radio frequency units, such as RRUs, AAUs, or RRHs.
  • RAN nodes can support one or more types of fronthaul interfaces, and different fronthaul interfaces correspond to DU and RU with different functions.
  • some baseband functions for the downlink and/or uplink are moved from the DU to the RU; for the downlink, for example, one or more of precoding, digital beamforming (BF), or inverse fast Fourier transform (IFFT)/cyclic prefix (CP) addition; for the uplink, for example, one or more of digital BF or fast Fourier transform (FFT)/CP removal.
  • the interface can be an enhanced common public radio interface (eCPRI).
  • the segmentation between DU and RU differs, corresponding to different categories (Cat) of eCPRI, such as eCPRI Cat A, B, C, D, E, and F.
  • For example, with layer mapping as the dividing line for the downlink, the DU is configured to implement one or more functions up to and including layer mapping (i.e., one or more of coding, rate matching, scrambling, modulation, or layer mapping), while the other functions after layer mapping (e.g., one or more of resource element (RE) mapping, digital beamforming (BF), or inverse fast Fourier transform (IFFT)/cyclic prefix (CP) addition) are implemented by the RU.
  • As another example, de-RE mapping is used as the dividing line for the uplink: the DU is configured to implement de-RE mapping and one or more of the functions that follow it in the receive chain (i.e., one or more of channel equalization, inverse discrete Fourier transform (IDFT), demodulation, descrambling, rate de-matching, or decoding), while the other functions before de-RE mapping (e.g., one or more of digital BF or fast Fourier transform (FFT)/CP removal) are implemented by the RU.
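  • Purely as an illustrative sketch of the functional splits described above (not taken from the application), the downlink and uplink chains and their dividing lines can be written out as ordered lists:

```python
# Downlink: the DU implements the functions up to and including layer mapping,
# the RU implements the functions after it.
DOWNLINK_DU = ["coding", "rate matching", "scrambling", "modulation", "layer mapping"]
DOWNLINK_RU = ["RE mapping", "digital beamforming", "IFFT / CP addition"]

# Uplink: the RU implements FFT/CP removal and digital beamforming,
# the DU implements de-RE mapping and the functions that follow it.
UPLINK_RU = ["FFT / CP removal", "digital beamforming"]
UPLINK_DU = ["de-RE mapping", "channel equalization", "IDFT", "demodulation",
             "descrambling", "rate de-matching", "decoding"]

def dividing_line(first_stage, second_stage):
    """The 'dividing line' is simply the boundary between the two processing stages."""
    return first_stage[-1], second_stage[0]

print(dividing_line(DOWNLINK_DU, DOWNLINK_RU))   # ('layer mapping', 'RE mapping')
print(dividing_line(UPLINK_RU, UPLINK_DU))       # ('digital beamforming', 'de-RE mapping')
```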
  • the processing unit in the BBU used to implement baseband functions is called the baseband high (BBH) unit
  • the processing unit in the RRU/AAU/RRH used to implement baseband functions is called the baseband low (BBL) unit.
  • In different architectures or deployments, the CU (or CU-CP and CU-UP), DU, and RU may have different names, but those skilled in the art will understand their meaning.
  • For example, in an open RAN (ORAN) architecture, the CU can also be called an O-CU (open CU), the DU can also be called an O-DU, the CU-CP can also be called an O-CU-CP, the CU-UP can also be called an O-CU-UP, and the RU can also be called an O-RU.
  • Any of the units among CU (or CU-CP, CU-UP), DU, and RU in this application can be implemented through software modules, hardware modules, or a combination of software modules and hardware modules.
  • the apparatus for implementing the functions of a network device can be a network device itself; it can also be an apparatus capable of supporting the network device in implementing those functions, such as a chip system, hardware circuit, software module, or a hardware circuit plus a software module.
  • This apparatus can be installed in the network device or used in conjunction with the network device.
  • This embodiment only uses the network device as an example to illustrate the apparatus for implementing the functions of the network device, and does not constitute a limitation on the solutions described in this embodiment.
  • Network devices and/or terminal devices can be deployed on land, including indoors or outdoors, handheld or vehicle-mounted; they can also be deployed on water; and they can also be deployed in the air on airplanes, balloons, and satellites. This application does not limit the scenario in which the network devices and terminal devices are located.
  • terminal devices and network devices can be hardware devices, software functions running on dedicated hardware, software functions running on general-purpose hardware, such as virtualization functions instantiated on a platform (e.g., a cloud platform), or entities that include dedicated or general-purpose hardware devices and software functions.
  • This application does not limit the specific form of terminal devices and network devices.
  • AI nodes (also known as AI entities) may be introduced into the network.
  • the AI entity can be deployed in one or more of the following locations within the communication system: access network devices, terminal devices, or core network devices, or the AI entity can be deployed independently, for example, in a location other than any of the aforementioned devices, such as the host or cloud server of an OTT system.
  • the AI entity can communicate with other devices in the communication system, which can be one or more of the following: network devices, terminal devices, or network elements of the core network.
  • the AI entity can include an AI entity on the network device side, an AI entity on the terminal device side, or an AI entity on the core network side.
  • this application does not limit the number of AI entities. For example, when there are multiple AI entities, they can be divided based on function, such as different AI entities being responsible for different functions.
  • AI entities can be AI network elements or AI modules. AI entities are used to implement corresponding AI functions. AI modules deployed in different network elements can be the same or different. Depending on the different parameter configurations, the AI model within an AI entity can achieve different functions.
  • the AI model within an AI entity can be configured based on one or more of the following parameters: structural parameters (e.g., at least one of the following: number of neural network layers, neural network width, inter-layer connections, neuron weights, neuron activation function, or biases in the activation function), input parameters (e.g., the type and/or dimension of the input parameters), or output parameters (e.g., the type and/or dimension of the output parameters).
  • the biases in the activation function can also be referred to as the biases of the neural network.
  • An AI entity can have one or more models.
  • the learning, training, or inference processes of different models can be deployed in different entities or devices, or they can be deployed in the same entity or device.
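  • As a hedged sketch of the parameter groups listed above (the class and field names, and the default values, are assumptions, not from the application), an AI model configuration could be represented as:

```python
from dataclasses import dataclass, field

@dataclass
class StructuralParameters:
    num_layers: int = 4
    layer_width: int = 128
    activation: str = "relu"
    # inter-layer connections, neuron weights, and activation-function biases would also live here

@dataclass
class IoParameters:
    data_type: str = "csi_eigenvectors"   # type of the input/output (assumed example)
    dimension: int = 256                  # dimension of the input/output

@dataclass
class AiModelConfig:
    structure: StructuralParameters = field(default_factory=StructuralParameters)
    inputs: IoParameters = field(default_factory=IoParameters)
    outputs: IoParameters = field(default_factory=IoParameters)
```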
  • FIG. 1 is a schematic diagram of an application framework applicable to an embodiment of this application.
  • devices are connected via interfaces (e.g., NG, Xn) or over-the-air interfaces.
  • These device nodes such as core network devices, access network nodes (RAN nodes), terminals, or one or more devices in OAM, are equipped with one or more AI modules (only one is shown in Figure 1 for clarity).
  • An access network node can be a single RAN node or can include multiple RAN nodes, for example, including CU and DU.
  • CU and/or DU can also be equipped with one or more AI modules.
  • a CU can also be split into CU-CP and CU-UP.
  • One or more AI models are configured in CU-CP and/or CU-UP.
  • This AI module is used to implement corresponding AI functions.
  • AI modules deployed in different devices can be the same or different.
  • depending on the parameter configuration, the AI module can achieve different functions.
  • the AI module model can be configured based on one or more of the following parameters: structural parameters (e.g., at least one of the following: number of neural network layers, neural network width, inter-layer connections, neuron weights, neuron activation function, or biases in the activation function), input parameters (e.g., the type and/or dimension of the input parameters), or output parameters (e.g., the type and/or dimension of the output parameters).
  • the biases in the activation function can also be referred to as the biases of the neural network.
  • An AI module can have one or more models.
  • a model can infer an output, which includes one or more parameters.
  • the learning, training, or inference processes of different models can be deployed on different nodes or devices, or they can be deployed on the same node or device.
  • FIG. 2 is a schematic diagram of another application framework applicable to the embodiments of this application.
  • the communication system includes a RAN intelligent controller (RIC).
  • the RIC can be the AI module in the RAN node shown in Figure 1, used to implement AI-related functions.
  • RICs include near-real-time RICs (near-RT RICs) and non-real-time RICs (non-RT RICs).
  • Non-real-time RICs mainly process non-real-time information, such as data that is not sensitive to latency, with latency in the order of seconds.
  • Near-real-time RICs mainly process near-real-time information, such as data that is relatively sensitive to latency, with latency on the order of tens of milliseconds.
  • Near real-time RICs are used for model training and inference. For example, they are used to train AI models and then use those models for inference.
  • Near real-time RICs can obtain network-side and/or terminal-side information from RAN nodes (e.g., CU, CU-CP, CU-UP, DU, and/or RU) and/or terminals. This information can be used as training data or inference data.
  • near real-time RIC can deliver inference results to RAN nodes and/or terminals.
  • inference results can be exchanged between CU and DU, and/or between DU and RU.
  • near real-time RIC submits inference results to DU, and DU sends them to RU.
  • Non-real-time RICs are also used for model training and inference. For example, they are used to train AI models and then use those models for inference.
  • Non-real-time RICs can obtain network-side and/or terminal-side information from RAN nodes (e.g., CU, CU-CP, CU-UP, DU, and/or RU) and/or terminals. This information can be used as training data or inference data, and the inference results can be delivered to the RAN nodes and/or terminals.
  • inference results can be exchanged between CU and DU, and/or between DU and RU.
  • a non-real-time RIC can submit inference results to DU, which in turn can send them to RU.
  • Near real-time RICs and non-real-time RICs can also be configured as separate devices. Alternatively, near real-time RICs and non-real-time RICs can also be part of other devices. For example, near real-time RICs can be configured in RAN nodes (e.g., CU, DU), while non-real-time RICs can be configured in OAM, cloud servers, core network devices, or other devices.
  • Figure 3 is a schematic diagram of a communication system applicable to an embodiment of this application.
  • the communication system may include at least one network device, such as network device 110; the communication system 100 may also include at least one terminal device, such as terminal device 120 and terminal device 130.
  • Network device 110 and terminal devices can communicate via a wireless link.
  • the communication devices in this communication system for example, network device 110 and terminal device 120, can communicate via multi-antenna technology.
  • Figure 4 is a schematic diagram of another communication system applicable to the embodiments of this application.
  • the communication system shown in Figure 4 further includes an AI device 140, which is used to perform AI-related operations, such as building a training dataset or training an AI model.
  • the AI device 140 is the aforementioned AI node or AI entity.
  • network device 110 sends data related to the training of the AI model to AI device 140, whereby AI device 140 constructs a training dataset and trains the AI model.
  • the data related to the training of the AI model may include data reported by the terminal device.
  • AI device 140 sends the results of operations related to the AI model to network device 110, which then forwards them to the terminal device.
  • the results of operations related to the AI model may include at least one of the following: a trained AI model, model evaluation results, or test results.
  • a portion of the trained AI model is deployed on network device 110, and another portion is deployed on the terminal device.
  • the trained AI model is deployed on network device 110.
  • the trained AI model is deployed on the terminal device.
  • Figure 4 only illustrates the example of AI device 140 being directly connected to network device 110.
  • AI device 140 can also be connected to terminal devices; AI device 140 can also be connected to both network device 110 and terminal devices simultaneously; AI device 140 can also be connected to network device 110 through third-party devices, etc. Therefore, this application does not limit the connection relationship between AI device and other devices.
  • the AI device 140 can also be installed as a module in network devices and/or terminal devices, for example, in network device 110 or a terminal device as shown in Figure 3.
  • Figures 3 and 4 are simplified schematic diagrams for ease of understanding.
  • the communication system may also include other devices, such as wireless relay devices and/or wireless backhaul devices, which are not shown in Figures 3 and 4.
  • the communication system may include multiple network devices (such as network device 110 and network device 150 (not shown in Figure 3)) and multiple terminal devices. Therefore, this application does not limit the number of network devices and terminal devices included in the communication system.
  • An AI model is an algorithm or computer program that enables AI functionality. It represents the mapping relationship between the model's input and output.
  • An AI model can also be called a model, an AI function, or a function.
  • One AI function can correspond to one or more AI models.
  • AI models can be neural networks, linear regression models, decision tree models, support vector machines (SVM), Bayesian networks, Q-learning models, or other machine learning (ML) models.
  • a two-sided model also known as a bilateral model, collaborative model, dual model, or two-side model, refers to a model composed of multiple sub-models. These sub-models need to be mutually compatible and can be deployed on different nodes.
  • This application embodiment relates to an encoder for compressing CSI and a decoder for recovering CSI.
  • the encoder and decoder are used in conjunction, and can be understood as paired AI models.
  • An encoder may include one or more AI models, and the decoder matched with the encoder also includes one or more AI models. The number of AI models included in the matched encoder and decoder are the same and correspond one-to-one.
  • the encoder may also include a quantization module, which can be used to quantize the output of the AI model in the encoder.
  • the decoder may include an inverse quantization module, which can be used to inverse quantize the feedback information of the received channel information to obtain the input of the AI model in the decoder. Inverse quantization can also be replaced by dequantization.
  • a set of matched encoders and decoders can be two parts of the same autoencoder (AE).
  • An AE whose encoder and decoder are deployed on different nodes is a typical bilateral model. In an AE, the encoder and decoder are usually trained jointly and used in combination.
  • An autoencoder is an unsupervised learning neural network that uses input data as labeled data; therefore, it can also be understood as a self-supervised learning neural network.
  • Autoencoders can be used for data compression and reconstruction. For example, the encoder in an autoencoder can compress (encode) data A to obtain data B; the decoder in an autoencoder can decompress (decode) data B to recover data A. Alternatively, the decoder can be understood as the inverse operation of the encoder. A description of the encoder and decoder can be found in Figure 5.
  • Figure 5 is a schematic diagram of the relationship between the encoder and the decoder. As shown in Figure 5, the encoder processes the input V to obtain the processed result z, and the decoder can decode the encoder's output z into the desired output V'.
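  • Purely as an illustrative sketch (the use of PyTorch, the dimensions, and the layer sizes are assumptions, not taken from the application), a matched encoder/decoder pair of this kind could look like:

```python
import torch
import torch.nn as nn

class CsiEncoder(nn.Module):
    """Compresses the channel input V into a low-dimensional code z."""
    def __init__(self, in_dim=256, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, code_dim))
    def forward(self, v):
        return self.net(v)

class CsiDecoder(nn.Module):
    """Reconstructs V' from the received code z."""
    def __init__(self, code_dim=32, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))
    def forward(self, z):
        return self.net(z)

encoder, decoder = CsiEncoder(), CsiDecoder()
v = torch.randn(1, 256)        # measured channel information (placeholder values)
z = encoder(v)                 # compressed feedback sent over the air interface
v_recovered = decoder(z)       # network-side reconstruction V'
```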
  • the autoencoder model in this application embodiment may include an encoder deployed on the terminal device side and a decoder deployed on the network device side, or an encoder deployed on a terminal device side and a decoder deployed on another terminal device side, or an encoder deployed on a network device side and a decoder deployed on another network device side.
  • Neural networks are a specific implementation of AI or machine learning. According to the general approximation theorem, neural networks can theoretically approximate any continuous function, thus enabling them to learn arbitrary mappings.
  • a neural network can be composed of neural units, which can be defined as computational units that take x_s and an intercept of 1 as inputs.
  • a neural network is a network formed by connecting many of these individual neural units together; that is, the output of one neural unit can be the input of another.
  • the input of each neural unit can be connected to the local receptive field of the previous layer to extract features from that local receptive field, which can be a region composed of several neural units.
  • the AI model involved in this application can be a deep neural network (DNN).
  • DNN can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), etc.
  • CNNs are neural networks specifically designed to process data with a grid-like structure. For example, time-series data (discrete sampling along the time axis) and image data (two-dimensional discrete sampling) can both be considered grid-like data.
  • CNNs do not use all the input information at once for computation; instead, they use a fixed-size window to extract a portion of the information for convolution operations, which significantly reduces the computational cost of model parameters.
  • each window can use different convolution kernels, allowing CNNs to better extract features from the input data.
  • RNNs are a type of DNN that feeds time-series information back: their input includes the current input value and their own output value from the previous time step. RNNs are well suited to capturing temporally correlated sequence features and are particularly applicable to applications such as speech recognition and channel encoding/decoding.
  • A characteristic of FNNs is that neurons in adjacent layers are fully connected to each other, which typically makes FNNs require a large amount of storage space and results in high computational complexity.
  • the FNN, CNN, and RNN mentioned above are all constructed based on neurons.
  • each neuron performs a weighted summation operation on its input values, and the result of the weighted summation is used to generate the output through a nonlinear function.
  • the weights of the weighted summation operation of neurons in a neural network and the nonlinear function are called the parameters of the neural network.
  • the parameters of all neurons in a neural network constitute the parameters of that neural network.
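  • As a minimal sketch of the neuron described above (the activation function and all values are illustrative assumptions), the following snippet computes a weighted sum of the inputs plus a bias (the intercept input of 1) and passes it through a nonlinear function; the weights and bias are the neuron's parameters.

```python
import numpy as np

def neuron(x, w, b):
    # Weighted summation of the inputs, plus bias, passed through a nonlinearity.
    return np.tanh(np.dot(w, x) + b)   # tanh is one possible activation function

x = np.array([0.5, -1.2, 0.3])   # inputs
w = np.array([0.8, 0.1, -0.4])   # weights (parameters of the neuron)
b = 0.2                          # bias, corresponding to the intercept input of 1
y = neuron(x, w, b)              # output of the neuron
```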
  • a dataset refers to the data used for model training, validation, and testing in machine learning.
  • the quantity and quality of the data will affect the effectiveness of machine learning.
  • ground truth usually refers to data that is considered accurate or real.
  • Training datasets are used to train AI models.
  • a training dataset can include the input to the AI model, or it can include both the input and the target output of the AI model.
  • a training dataset includes one or more training data points, which can include training samples input to the AI model or the target output of the AI model.
  • the target output can also be referred to as a label, sample label, or labeled sample.
  • the label is the ground truth value.
  • training datasets can include simulation data collected through simulation platforms, experimental data collected from experimental scenarios, or measured data collected in actual communication networks. Because the geographical environment and channel conditions where the data is generated vary—for example, indoor/outdoor conditions, movement speed, frequency bands, or antenna configurations—the collected data can be categorized during acquisition. For instance, data with the same channel propagation environment and antenna configuration can be grouped together.
  • Model training essentially involves having an AI model (such as a neural network model) learn certain features from the training data.
  • the goal is to make the model's output as close as possible to the desired predicted value. This is achieved by comparing the network's current predictions with the target value and updating the weight vector of each layer based on the difference. (Of course, there's usually an initialization process before the first update, where parameters are pre-configured for each layer.) For example, if the network's prediction is too high, the weight vector is adjusted to predict a lower value. This adjustment continues until the AI model can predict the target value or a value very close to it. Therefore, it's necessary to predefine "how to compare the difference between the predicted and target values," which is the loss function or objective function.
  • adjusting the model parameters of the neural network includes adjusting at least one of the following parameters: the number of layers of the neural network, the width, the weights of the neurons, or the parameters in the activation function of the neurons.
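  • The comparison-and-update idea described above can be sketched as a single gradient-descent step on a toy linear predictor with a squared-error loss (all values are illustrative assumptions, not part of this application):

```python
import numpy as np

w = np.array([0.0, 0.0])     # initialized model parameters (weights)
x = np.array([1.0, 2.0])     # one training input
target = 3.0                 # desired (target) output

pred = w @ x                         # the model's current prediction
loss = (pred - target) ** 2          # loss function: squared difference
grad = 2 * (pred - target) * x       # gradient of the loss with respect to w
w = w - 0.1 * grad                   # update the weights to reduce the loss
```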
  • Inference data can be used as input to a trained AI model for inference.
  • the inference data is input into the AI model, and the corresponding output, which is the inference result, is obtained.
  • the design of an AI model mainly includes the data collection phase (e.g., collecting training data and/or inference data), the model training phase, and the model inference phase. It can also further include the application of the inference results.
  • the training processes of different models can be deployed on different devices or nodes, or on the same device or node.
  • the inference processes of different models can be deployed on different devices or nodes, or on the same device or node.
  • the terminal device sends the decoder's model parameters to the network device.
  • Alternatively, the encoder's model parameters can be provided to the terminal device and the decoder's model parameters to the network device; the model inference phase corresponding to the encoder is then performed on the terminal device, and the model inference phase corresponding to the decoder is performed on the network device.
  • the model parameters can include one or more of the following: model structure parameters (e.g., number of layers, and/or weights), model input parameters (e.g., input dimension, number of input ports), or model output parameters (e.g., output dimension, number of output ports).
  • the input dimension refers to the size of an input data set; for example, if the input data is a sequence, the corresponding input dimension indicates the length of the sequence.
  • the number of input ports refers to the quantity of input data.
  • the output dimension refers to the size of an output data set; for example, if the output data is a sequence, the corresponding output dimension indicates the length of the sequence.
  • the number of output ports refers to the quantity of output data.
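  • As an illustration only (the field names and values are assumptions), the model parameters listed above could be grouped in a structure such as the following:

```python
from dataclasses import dataclass

@dataclass
class ModelParameters:
    num_layers: int        # model structure parameter (weights would be stored separately)
    input_dim: int         # model input parameter, e.g., length of the input sequence
    num_input_ports: int   # quantity of input data
    output_dim: int        # model output parameter, e.g., length of the output sequence
    num_output_ports: int  # quantity of output data

params = ModelParameters(num_layers=4, input_dim=256, num_input_ports=1,
                         output_dim=32, num_output_ports=1)
```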
  • the network device determines one or more configurations, such as downlink data channel resources, modulation and coding scheme (MCS), and precoding, for scheduling the terminal device based on channel information.
  • Channel information also known as channel state information (CSI) or channel environment information, is information that reflects channel characteristics and quality.
  • Channel information measurement refers to the process by which the receiver derives the channel information from a reference signal transmitted by the transmitter; that is, it estimates the channel information using channel estimation methods.
  • the reference signal may include one or more of the following: channel state information reference signal (CSI-RS), synchronizing signal/physical broadcast channel block (SSB), sounding reference signal (SRS), or demodulation reference signal (DMRS).
  • One or more of CSI-RS, SSB, and DMRS can be used to measure downlink channel information.
  • SRS and/or DMRS can be used to measure uplink channel information.
  • Channel information can be determined based on channel measurement results of a reference signal.
  • channel information can be the channel measurement results of a reference signal.
  • the channel measurement results of the reference signal or the channel measurement results can also be replaced by channel information.
  • network devices need to obtain downlink CSI through uplink feedback from terminal devices.
  • Network devices typically send a downlink reference signal to the terminal device, which receives this signal. Since the terminal device knows the transmission information of the downlink reference signal, it can perform channel measurement and interference measurement based on the received signal to estimate the downlink channel traversed by the downlink reference signal. The terminal device then generates the downlink CSI based on this measurement and the resulting downlink channel matrix. The terminal device generates a CSI report according to a predefined protocol method or a network device configuration method and feeds it back to the network device so that it can obtain the downlink CSI.
  • The channel information may include one or more of the following: channel quality indicator (CQI), precoding matrix indicator (PMI), rank indicator (RI), CSI-RS resource indicator (CRI), channel response information (such as a channel response matrix, frequency-domain channel response information, or time-domain channel response information), weight information corresponding to the channel response, reference signal received power (RSRP), or signal-to-interference-plus-noise ratio (SINR).
  • RI indicates the number of downlink transmission layers recommended by the receiving end of the reference signal (such as a terminal device).
  • CQI indicates the modulation and coding scheme that the receiving end of the reference signal (such as a terminal device) determines the current channel conditions can support.
  • PMI indicates the precoding recommended by the receiving end of the reference signal (such as a terminal device).
  • the number of precoding layers indicated by PMI corresponds to RI.
  • channel information can be obtained by measuring the reference signal. Compressing and/or quantizing this channel information yields feedback information.
  • This feedback information can be reported via a channel information report. Therefore, feedback information can also be referred to as a channel report.
  • a channel report may include at least one sub-channel report.
  • the channel information can be recovered by decompressing and/or inverse-quantizing the feedback information. Decompression can also be understood as recovery or reconstruction, which will not be elaborated further below.
  • Feedback information can also be called channel information feedback information, CSI feedback information, compressed information, compressed channel information, compressed CSI information, or compressed CSI, etc.
  • the recovered channel information can also be called CSI recovery information.
  • As the antenna scale increases, the number of supported antenna ports also increases, leading to growth in the dimensionality of the corresponding channel matrix and precoding matrix.
  • the overhead of network devices transmitting reference signals increases.
  • the error in approximating large-scale channel and precoding matrices using a finite number of predefined codewords increases.
  • One method to improve channel recovery accuracy is to increase the number of codewords in the codebook; however, this simultaneously increases the overhead of CSI feedback (including the corresponding codeword number and one or more weighting coefficients), thereby reducing available resources for data transmission and causing system capacity loss.
  • A CSI feedback method based on AI models is known as AI-CSI feedback.
  • Terminal devices use AI models to compress and feed back CSI
  • network devices use AI models to reconstruct the compressed CSI.
  • AI-based CSI feedback transmits a sequence (such as a bit sequence), resulting in lower overhead compared to traditional CSI feedback.
  • AI models possess stronger nonlinear feature extraction capabilities, enabling more effective compression and representation of channel information and more efficient channel reconstruction based on feedback information compared to traditional methods.
  • the encoder in Figure 5 can be a CSI generator, and the decoder can be a CSI reconstructor.
  • the encoder can be deployed in the terminal device, and the decoder can be deployed in the network device.
  • The encoder processes the channel information V to generate the CSI feedback information z.
  • The decoder then reconstructs the channel information from z to obtain the recovered channel information V'.
  • Channel information V can be obtained through channel information measurement.
  • channel information V may include the eigenvector matrix (a matrix composed of eigenvectors) of the downlink channel.
  • the encoder processes the eigenvector matrix of the downlink channel to obtain CSI feedback information z.
  • In other words, the codebook-based compression and/or quantization operations on the eigenvector matrix in the related scheme are replaced by the encoder processing the eigenvector matrix to obtain the CSI feedback information z.
  • the decoder processes the CSI feedback information z to obtain the recovered channel information V'.
  • Training data used to train an AI model includes training samples and sample labels.
  • training samples are channel information measured by the terminal device, and sample labels are the actual channel information, i.e., the ground truth CSI. If the encoder and decoder belong to the same autoencoder, the training data may only include training samples, or in other words, the training samples are the sample labels.
  • the true CSI can be a high-precision CSI.
  • the specific training process is as follows:
  • the model training node uses an encoder to process channel information, i.e., training samples, to obtain CSI feedback information, and uses a decoder to process the feedback information to obtain the recovered channel information, i.e., the CSI recovered information.
  • the difference between the CSI recovered information and the corresponding sample label is calculated, i.e., the value of the loss function.
  • the parameters of the encoder and decoder are updated according to the value of the loss function to minimize the difference between the recovered channel information and the corresponding sample label, i.e., minimizing the loss function.
  • the loss function can be the mean square error (MSE) or cosine similarity.
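  • A minimal sketch of this training process (dimensions, learning rate, and data are illustrative assumptions) in Python with PyTorch, where the training sample also serves as the sample label and the loss is the MSE between the recovered channel information and the label:

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))
decoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 256))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()                      # cosine similarity is another possible loss

training_data = torch.randn(100, 256)       # placeholder channel-information samples
for csi in training_data.split(10):         # mini-batches of 10 samples
    z = encoder(csi)                        # CSI feedback information
    csi_rec = decoder(z)                    # recovered channel information
    loss = loss_fn(csi_rec, csi)            # difference from the sample label
    opt.zero_grad()
    loss.backward()
    opt.step()                              # update encoder and decoder parameters
```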
  • the above model training node can be a terminal device, network device, or other network element with AI functionality in a communication system.
  • the AI model can be implemented in hardware circuits, software, or a combination of both.
  • Non-limiting examples of software include: program code, program, subroutine, instruction, instruction set, code, code segment, software module, application program, or software application, etc.
  • the configuration of reference signals sent by network devices to terminal devices is flexible and variable, which may cause performance fluctuations in AI models during channel information processing. For example, if the AI model on the terminal device side is an AI CSI compression model, it may lead to performance fluctuations in CSI feedback. Similarly, if the AI model on the terminal device side is an AI CSI prediction model, it may lead to performance fluctuations in CSI prediction.
  • this application aims to provide a communication method that can guarantee or improve the performance of AI models when network devices perform different resource configurations to terminal devices.
  • Figures 6 to 11 are illustrated using the example of the first device as a terminal device and the second device as a network device.
  • the terminal device is used to replace the first device
  • the network device is used to replace the second device.
  • Figure 6 is a schematic flowchart of a communication method 600 provided in an embodiment of this application. As shown in Figure 6, the method includes at least the following steps.
  • S610: the network device determines the first resource configuration.
  • the network device sends first indication information to the terminal device, and the terminal device receives the first indication information accordingly.
  • the first indication information indicates a first resource configuration, which is one of at least one resource configuration supported by the first AI model.
  • After receiving the first resource configuration, the terminal device measures channel information based on the first resource configuration and processes the channel information according to the first AI model.
  • processing the channel information according to the first AI model includes: compressing the channel information according to the first AI model.
  • processing the channel information according to the first AI model includes: performing prediction processing on the channel information according to the first AI model, and then further compressing the channel information obtained from the prediction processing.
  • the AI CSI prediction followed by compression model can also be called an AI CSI prediction plus compression model or an AI CSI prediction + compression model; it should be understood that this application does not impose any limitations on this.
  • the method may further include: the terminal device determining a first channel report.
  • This can be understood as the terminal device determining the first channel report after compressing, based on the first AI model, the channel information obtained from measurement under the first resource configuration.
  • processing the channel information according to the first AI model includes: performing prediction processing on the channel information according to the first AI model.
  • the method may further include: the terminal device performing compression processing on the channel information obtained from the prediction processing according to a non-AI model.
  • resource configuration includes configuration type, offset of adjacent resources, or number of transmissions, wherein the configuration type includes at least one of the following: periodic configuration, semi-static configuration, or aperiodic configuration.
  • resource configuration can be CSI-RS configuration or configuration of other reference signals.
  • For a periodic CSI-RS configuration, the network device configures the transmission period (e.g., every N slots) and the offset (the slot offset within the period) of the CSI-RS resources and informs the terminal device.
  • The transmission period of the CSI-RS resources can be understood as the offset between adjacent CSI-RS resources.
  • For a semi-persistent (semi-static) CSI-RS configuration, the network device likewise configures the transmission period (e.g., every N slots) and the offset (the slot offset within the period) of the CSI-RS resource and informs the terminal device.
  • In addition, the network device can activate or deactivate the transmission of the CSI-RS resource using configuration information such as a medium access control-control element (MAC-CE).
  • aperiodic CSI-RS configuration also supports configuring the transmission of one or more CSI-RS resources at a time. For instance, the network device configures this using a set of parameters [m, K], where m is the transmission period of the aperiodic CSI-RS resources (e.g., every m slots), and K is the number of CSI-RS resource transmissions.
  • the transmission period of the CSI-RS resources can be understood as the offset between adjacent CSI-RS resources, and the number of CSI-RS resource transmissions can be understood as the number of CSI-RS resources; these details will not be elaborated further below.
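  • For illustration only (the field names are assumptions), a resource configuration as described above could be represented by a configuration type, the offset between adjacent resources, and, for the aperiodic case, the number of transmissions [m, K]:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResourceConfig:
    config_type: str                              # "periodic", "semi-persistent", or "aperiodic"
    adjacent_offset_slots: Optional[int] = None   # period / offset between adjacent CSI-RS resources
    num_transmissions: Optional[int] = None       # K, the number of CSI-RS resource transmissions

periodic_cfg = ResourceConfig("periodic", adjacent_offset_slots=5)
aperiodic_cfg = ResourceConfig("aperiodic", adjacent_offset_slots=2, num_transmissions=4)  # [m, K]
```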
  • In step S610, the network device can determine the first resource configuration in several possible ways.
  • the network device obtains at least one resource configuration supported by the first AI model through protocol predefinition, and then the network device selects a first resource configuration from the at least one resource configuration.
  • the network device obtains not only at least one resource configuration supported by a first AI model through protocol predefinition, but also at least one resource configuration supported by a second AI model through protocol predefinition. Subsequently, the network device selects a first resource configuration from all the obtained resource configurations.
  • the second AI model can be understood as one or more AI models different from the first AI model.
  • the method may further include: the terminal device sending first information to the network device, and correspondingly, the network device receiving the first information, wherein the first information is associated with at least one resource configuration.
  • the first information includes at least one resource configuration supported by the first AI model.
  • the terminal device sends first information to the network device, which includes at least one resource configuration supported by the first AI model on the terminal device side. That is to say, the terminal device reports at least one resource configuration supported by the first AI model to the network device.
  • For example, if there is only one AI model (e.g., AI model #1) on the terminal device side, the terminal device reports at least one resource configuration supported by AI model #1. For example, as shown in Table 1, AI model #1 supports the following resource configurations.
  • the AI model #1 on the terminal device side supports four resource configurations (Resource Configuration #1 to Resource Configuration #4).
  • Resource Configuration #1 is a periodic CSI-RS configuration
  • Resource Configuration #2 is a semi-static CSI-RS configuration
  • Resource Configurations #3 and #4 are aperiodic CSI-RS configurations.
  • the offset of the adjacent resources for the aperiodic CSI-RS configuration corresponding to Resource Configuration #3 is m1
  • the number of CSI-RS transmissions is K1.
  • the offset of the adjacent resources for the aperiodic CSI-RS configuration corresponding to Resource Configuration #4 is m2, and the number of CSI-RS transmissions is K2.
  • After receiving the above four resource configurations, the network device selects one resource configuration (i.e., the first resource configuration) from the four resource configurations and sends it to the terminal device.
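  • The reporting-and-selection flow above can be sketched as follows (the configuration names, preference order, and function are illustrative assumptions): the terminal device reports the resource configurations supported by its AI model, and the network device picks one of them as the first resource configuration.

```python
# Resource configurations reported by the terminal device for AI model #1 (placeholders).
supported_by_model = {
    "AI model #1": ["periodic", "semi-persistent", "aperiodic[m1,K1]", "aperiodic[m2,K2]"],
}

def select_first_resource_config(reported, preferred_order):
    # Return the first configuration in the network's preference order that the model supports.
    for cfg in preferred_order:
        if any(cfg in configs for configs in reported.values()):
            return cfg
    return None

first_cfg = select_first_resource_config(supported_by_model, ["aperiodic[m1,K1]", "periodic"])
```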
  • the first information also includes at least one resource configuration supported by the second AI model.
  • the second AI model can be understood as one or more AI models different from the first AI model.
  • the terminal device can also report at least one resource configuration supported by all AI models. It should be noted that in this case, the terminal device reports multiple AI models in multiple installments. Assuming there are three AI models on the terminal device side, in one scenario, the terminal device reports the resource configurations supported by each AI model in three separate reports, with each report occurring at a different time. For instance, the terminal device reports the resource configuration supported by AI model #1 at the first time, AI model #2 at the second time, and AI model #3 at the third time. It should be understood that the first, second, and third times are different. Therefore, the network device can determine the different AI models and the resource configurations supported by each AI model based on the resource configurations received at different times. For example, the network device might determine that the resource configuration received at the first time is supported by AI model #1.
  • the terminal device reports different resource configuration indications for each AI model; in other words, the AI models can report different resource configurations corresponding to different AI models through different indications. For instance, the terminal device reports the resource configuration supported by AI model #1 through indication #A, the terminal device reports the resource configuration supported by AI model #2 through indication #B, and the terminal device reports the resource configuration supported by AI model #3 through indication #C. It should be understood that indications #A, #B, and #C are different. Then, the network device can determine different AI models and the resource configurations supported by different AI models based on the different indications received. For example, the network device can determine that the resource configuration received through indication #A is supported by AI model #1.
  • each AI model reported by the terminal device in multiple reports can also be associated with other information (other than the above-mentioned indication information), and this application does not impose any restrictions on this.
  • AI Models #1 to #3 support the following resource configurations respectively.
  • AI model #1 on the terminal device side supports four resource configurations (resource configuration #1-resource configuration #4), AI model #2 supports three resource configurations (resource configuration #1-resource configuration #3), and AI model #3 also supports three resource configurations (resource configuration #1-resource configuration #3).
  • the relevant descriptions of the resource configurations for AI model #1 can be found above and will not be repeated here.
  • the offset of adjacent resources for the aperiodic configuration CSI-RS is m3, and the number of times CSI-RS resources are sent is K3.
  • the network device selects a resource configuration (i.e., the first resource configuration) from the resource configurations and sends it to the terminal device.
  • AI models #1, #2, and #3 mentioned above are only for the purpose of illustrating that these are three different AI models and are not related to the identification information of the three AI models.
  • the first information includes identification information of the first AI model.
  • This identification information identifies the first AI model; for instance, it can be a number sequence identifier, a letter sequence identifier, or a dataset identifier, where the dataset identifier indicates that the first AI model was trained on a specific dataset.
  • a device may have multiple datasets, and the device can train multiple AI models based on these datasets.
  • the dataset identifier can indicate the dataset used by a particular AI model. As an example, different datasets can be identified using different identifiers.
  • identification information of the first AI model can be randomly generated or predefined, and it should be understood that this application does not impose any restrictions on this.
  • identification information of the first AI model mentioned above is merely an example, and this application does not impose any limitations on it.
  • the network device obtains, through protocol predefinition, the correspondence between the identification information of the first AI model (e.g., identifier 1) and at least one resource configuration, as shown in Table 3.
  • the terminal device sends the identification information of the first AI model (e.g., identifier 1) to the network device.
  • the network device determines at least one resource configuration supported by the first AI model corresponding to identifier 1, and selects one resource configuration (e.g., the first resource configuration) from the at least one resource configuration and sends it to the terminal device.
  • the correspondence can also be referred to as an association relationship, etc., and is not limited here.
  • the AI model corresponding to identifier 1 supports four resource configurations (Resource Configuration #1 to Resource Configuration #4).
  • Resource Configuration #1 is a periodic CSI-RS configuration
  • Resource Configuration #2 is a semi-static CSI-RS configuration
  • Resource Configurations #3 and #4 are aperiodic CSI-RS configurations.
  • the offset of adjacent resources for the aperiodic CSI-RS configuration corresponding to Resource Configuration #3 is m4, and the number of CSI-RS transmissions is K4.
  • the offset of adjacent resources for the aperiodic CSI-RS configuration corresponding to Resource Configuration #4 is m5, and the number of CSI-RS transmissions is K5.
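  • The Table-3 style correspondence can be sketched as a simple lookup (the identifiers and configuration names are placeholders, not values from this application): the network device maps a reported identifier to the set of resource configurations supported by the corresponding AI model and then selects one of them.

```python
# Protocol-predefined correspondence: AI-model identifier -> supported resource configurations.
id_to_configs = {
    1: ["periodic", "semi-persistent", "aperiodic[m4,K4]", "aperiodic[m5,K5]"],
}

reported_identifier = 1                          # identification information sent by the terminal device
candidate_configs = id_to_configs[reported_identifier]
first_resource_config = candidate_configs[0]     # the network device selects one and signals it
```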
  • the network device can also pre-define the identification information of other AI models on the terminal device side and the correspondence between them and at least one resource configuration through protocol pre-definition.
  • the network device can also pre-define the identification information of a second AI model and the correspondence between it and at least one resource configuration supported by the second AI model.
  • the second AI model can be understood as one or more AI models different from the first AI model.
  • the identification information of the second AI model is used to identify the second AI model.
  • the identification information of the second AI model can be a number sequence identifier, a letter sequence identifier, or a dataset identifier, where the dataset identifier indicates that the second AI model was trained on a specific dataset.
  • a device may have multiple datasets, and the device can train multiple AI models based on these datasets.
  • the dataset identifier indicates the dataset used by a particular AI model. As an example, different datasets can be identified using different identifiers.
  • identification information of the second AI model can be randomly generated or predefined, and it should be understood that this application does not impose any restrictions on this.
  • identification information of the second AI model mentioned above is merely an example, and this application does not impose any limitations on it.
  • the AI model corresponding to identifier 1 supports 4 resource configurations (resource configuration #1 to resource configuration #4), the AI model corresponding to identifier 2 supports 3 resource configurations (resource configuration #1 to resource configuration #3), and the AI model corresponding to identifier 3 supports 3 resource configurations (resource configuration #1 to resource configuration #3).
  • the description of the resource configurations supported by the AI model corresponding to identifier 1 can be found above and will not be repeated here.
  • For the AI model corresponding to identifier 2, no restriction is placed on the offset of adjacent resources or on the number of CSI-RS resource transmissions in the aperiodic CSI-RS configuration.
  • the AI model corresponding to identifier 3 supports an offset of m6 for adjacent resources in the aperiodic CSI-RS configuration and a transmission count of K6 for the CSI-RS resource.
  • the terminal device sends the identification information (e.g., identifier 1) of the first AI model to the network device. Based on the received identifier 1 and the correspondence between the identification information of all AI models on the terminal device side obtained by the protocol predefined and at least one resource configuration, the network device determines at least one resource configuration supported by the first AI model corresponding to identifier 1, and selects one resource configuration (e.g., the first resource configuration) from the at least one resource configuration and sends it to the terminal device.
  • the method may further include: the terminal device sending second indication information to the network device, and the network device receiving the second indication information accordingly.
  • the second indication information indicates the correspondence between the identification information of the first AI model (e.g., identifier 1) and at least one resource configuration, as shown in Table 3.
  • After receiving the second indication information and the identification information of the first AI model (e.g., identifier 1), the network device determines, based on the identification information of the first AI model reported by the terminal device (i.e., identifier 1) and the second indication information, at least one resource configuration supported by the first AI model corresponding to identifier 1, and selects one resource configuration (e.g., the first resource configuration) from the at least one resource configuration and sends it to the terminal device.
  • the second indication information may also be used to indicate the correspondence between the identification information of the second AI model and at least one resource configuration supported by the second AI model, wherein the second AI model can be understood as one or more AI models different from the first AI model.
  • the second indication information indicates the correspondence between the AI model corresponding to identifier 1 and at least one resource configuration, the correspondence between the AI model corresponding to identifier 2 and at least one resource configuration, and the correspondence between the AI model corresponding to identifier 3 and at least one resource configuration. That is, the second indication information simultaneously indicates the correspondence between the identifier information of three AI models and their corresponding at least one resource configuration.
  • the network device determines at least one resource configuration supported by the first AI model based on identifier 1 and the second indication information, and selects one resource configuration (e.g., the first resource configuration) from the at least one resource configuration and sends it to the terminal device.
  • the terminal device sending the second indication information to the network device as described above can be replaced by the terminal device sending second information to the network device, which includes the second indication information.
  • the terminal device can report the second indication information to the network device through the second information.
  • the second information may be model-related information, or it may be registration request information, or it may be terminal device capability information. It should be understood that the above are merely illustrative examples, and this application does not impose any limitations on them.
  • When the second information is model-related information, it can be understood as the second indication information being exchanged by the terminal device during the bilateral model alignment (pairing) phase.
  • When the second information is a registration request message, it can be understood as the second indication information being exchanged during the phase in which the terminal device requests access to the network device; that is, the terminal device sends a registration request message to the network device requesting registration with the network, and this registration request message includes the second indication information.
  • When the second information is the terminal device's capability information, it can be understood as the terminal device reporting the second indication information to the network device while reporting its own capability information.
  • the first information mentioned above also includes functional information of the first AI model.
  • the terminal device sends first information to the network device, and the network device receives the first information accordingly.
  • This first information also includes functional information of the first AI model.
  • This functional information can be, for example, time-domain channel state information (CSI) prediction followed by compression, spatial- and frequency-domain CSI compression, spatial-, frequency-, and time-domain CSI compression, time-domain CSI prediction, or beamforming time-domain prediction.
  • the functional information of the first AI model can be understood as the problems that the first AI model can solve, or the tasks that the first AI model can perform.
  • For example, the first AI model can be used for time-domain CSI prediction followed by compression.
  • several historical CSIs or the current CSIs can be input into the first AI model, and the output predicted CSIs for one or more future times can be compressed.
  • the first AI model can be used for CSI compression in both the spatial and frequency domains, such as jointly compressing the frequency and spatial information corresponding to the first resource configuration based on the first AI model.
  • the first AI model can be used for CSI compression in the spatial, frequency, and time domains, such as jointly compressing the frequency, spatial, and time domain information corresponding to the first resource configuration based on the first AI model.
  • the first AI model can be used for CSI prediction of time-domain channel state information. For instance, several historical CSIs or the current CSIs can be input into the first AI model, and the predicted CSIs for one or more future moments can be output.
  • the first AI model can be used for beam temporal prediction. For instance, it can input first channel information into the first AI model and output second channel information (or predicted channel information).
  • the first channel information is the RSRP in the first beam set
  • the second channel information is the RSRP in the second beam set or the identifiers of the optimal K beams, etc.
  • the first and second beam sets can be the same or different.
  • the optimal K beams can be the K beams in the second beam set whose channel quality (e.g., RSRP, SINR) is greater than a certain threshold, where K is a positive integer.
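  • A minimal sketch of the optimal-K-beam selection described above (the RSRP values, threshold, and K are illustrative assumptions): keep the identifiers of the K beams in the second beam set whose predicted channel quality exceeds a threshold.

```python
import numpy as np

predicted_rsrp_dbm = np.array([-85.0, -92.0, -78.0, -101.0, -80.0])  # one value per beam
threshold_dbm = -90.0
K = 2

above = np.flatnonzero(predicted_rsrp_dbm > threshold_dbm)        # beams above the threshold
best_k = above[np.argsort(predicted_rsrp_dbm[above])[::-1][:K]]   # strongest K of those beams
print(best_k)   # identifiers (indices) of the optimal K beams
```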
  • the terminal device sends the first information to the network device.
  • This first information may include the identification information of the first AI model, the functional information of the first AI model, and the resource configuration supported by the first AI model.
  • the identification information of the first AI model is illustrated below as Identifier 1.
  • For example, the functional information of the AI model corresponding to Identifier 1 is time-domain CSI prediction followed by compression, and the resource configurations supported by the AI model corresponding to Identifier 1 are resource configuration #1 to resource configuration #4.
  • resource configurations supported by the AI model corresponding to Identifier 1 please refer to the previous text, which will not be repeated here.
  • the first information mentioned above may further include functional information of the second AI model, wherein the second AI model can be understood as one or more AI models different from the first AI model.
  • Optionally, the terminal device can send the functional information of all AI models supported by the terminal device to the network device. As shown in Table 6, this is illustrated using an example of a terminal device supporting five AI models.
  • the AI model corresponding to identifier 1 provides time-domain CSI prediction and compression functionality; the AI model corresponding to identifier 2 provides spatial and frequency-domain CSI compression functionality; the AI model corresponding to identifier 3 provides spatial, frequency, and time-domain CSI compression functionality; the AI model corresponding to identifier 4 provides time-domain CSI prediction functionality; and the AI model corresponding to identifier 5 provides beamforming time-domain prediction functionality.
  • the AI model corresponding to identifier 4 supports an offset of m7 for adjacent CSI-RS resources with an aperiodic configuration and a transmission count of K7 for CSI-RS resources;
  • the AI model corresponding to identifier 5 supports an offset of m8 for adjacent CSI-RS resources with an aperiodic configuration and a transmission count of K8 for CSI-RS resources. It should be noted that the descriptions of the resource configurations supported by the AI models corresponding to identifiers 1 to 3 shown in Table 6 can be found above and will not be repeated here.
  • Tables 1 to 6 are merely illustrative examples.
  • the functional information of the AI model corresponding to identifier 1 in Table 6 may be spatial and frequency domain CSI compression functional information
  • the functional information of the AI model corresponding to identifier 2 may be time domain CSI compression functional information, etc. This application does not impose any restrictions on this.
  • Figure 7 is a schematic flowchart of a communication method 700 provided in an embodiment of this application. As shown in Figure 7, the method includes at least the following steps.
  • the method includes: S710, the network device sends a second resource configuration to the terminal device, and correspondingly, the terminal device receives the second resource configuration.
  • S720: the terminal device sends a second channel report and third indication information to the network device, and the network device receives the second channel report and the third indication information accordingly.
  • The third indication information indicates that the second channel report is associated with a third AI model.
  • The second channel report is generated by processing, based on the third AI model, the channel information measured under the second resource configuration. Specifically, after receiving the second resource configuration, the terminal device measures the channel information based on the second resource configuration and processes this channel information using the third AI model. Subsequently, the terminal device sends the third indication information to the network device, indicating that the second channel report is associated with the third AI model, so that the network device can use the AI model corresponding to the third AI model to decompress the second channel report and obtain the recovered channel information.
  • the channel information is processed based on the third AI model, including: performing prediction processing on the channel information based on the third AI model, and then further compressing the channel information obtained from the prediction processing.
  • processing the channel information based on the third AI model includes: compressing the channel information based on the third AI model.
  • association can be understood as the second channel report being generated based on the processing of the third AI model.
  • association can be replaced with “correspondence”, and this application embodiment does not limit this.
  • the third indication information can be the identification information of the third AI model. Specifically, after receiving the identification information of the third AI model, the network device knows that the second channel report is generated based on the third AI model, so that the network device can use the AI model corresponding to the third AI model to decompress the second channel report to obtain the recovered channel information.
  • the method may further include: the network device determining a second resource configuration.
  • the description of the network device determining the second resource configuration can also refer to the relevant description of the network device determining the first resource configuration described above; for simplicity, it will not be repeated here.
  • the method may further include: the network device sending fourth indication information to the terminal device, and correspondingly, the terminal device receiving the fourth indication information.
  • the fourth indication information specifies the configuration information for the second channel report, including its maximum feedback overhead. Specifically, after receiving the fourth indication information, the terminal device learns the maximum feedback overhead of the second channel report, selects the third AI model, processes the channel information obtained from the second resource configuration measurement based on the third AI model, and generates the second channel report.
  • the configuration information for the second channel report may include a maximum feedback overhead of M bits.
  • the method may further include: the terminal device determining information for the second channel report based on the second resource configuration. Specifically, the terminal device measures channel information based on the second resource configuration and processes the channel information based on a third AI model to generate the second channel report.
  • The information in the second channel report includes at least one of the following: time-domain information corresponding to one or more measurement resources, or the number of bits corresponding to each resource's information among the one or more measurement resources; this is not limited here.
  • the terminal device has multiple AI models, and the information in the channel reports generated based on different AI models is different.
  • Table 7 the example of three AI models on the terminal device side will be used for illustration.
  • the AI model corresponding to identifier 6 has a time-domain prediction and compression function
  • the AI model corresponding to identifier 7 has a spatial and frequency-domain information compression function
  • the AI model corresponding to identifier 8 has a time-domain prediction and spatial, frequency, and time-domain information compression function. Therefore, the channel reports generated by these three AI models based on the channel information obtained from the second resource configuration measurement are also different, as shown in Table 8.
  • The channel report generated based on the AI model corresponding to identifier 6 includes: predicted CSI for 4 slots, meaning it includes the time-domain information corresponding to the measurement resources for 4 slots; and the total overhead M of the channel report, where the number of bits corresponding to each resource's information among the multiple measurement resources is M/4.
  • The channel report generated based on the AI model corresponding to identifier 7 includes: predicted CSI for 1 slot, meaning it includes the time-domain information corresponding to the measurement resource for 1 slot; and the total overhead M of the channel report, where the number of bits corresponding to the resource's information is M.
  • The channel report generated based on the AI model corresponding to identifier 8 includes: predicted CSI for the currently measured slot, meaning it includes the time-domain information corresponding to the measurement resource for the currently measured slot; and the total overhead M of the channel report, which can include the number of bits corresponding to each resource's information among the multiple measurement resources.
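  • The per-resource bit split in the examples above can be sketched as follows (M and the slot counts are placeholders): with a total channel-report overhead of M bits spread over several measurement resources, each resource's information is allotted M divided by the number of resources.

```python
M = 128                                    # total overhead of the channel report, in bits

def bits_per_resource(total_bits, num_resources):
    return total_bits // num_resources     # e.g., M/4 when the report covers 4 predicted slots

print(bits_per_resource(M, 4))   # report of the identifier-6 type: 4 predicted slots
print(bits_per_resource(M, 1))   # report of the identifier-7 type: 1 predicted slot
```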
  • the third AI model can also be the first AI model described above, and correspondingly, the second channel report can also be the first channel report described above, and the second resource configuration can be the first resource configuration described above.
  • the embodiments shown in Figures 6 and 7 can be used in combination under certain circumstances.
  • Figure 8 is a schematic flowchart of a communication method 800 provided by an embodiment of this application, which includes at least the following steps.
  • S810: the network device determines the second resource configuration.
  • Step S810 is similar to step S610, and will not be described in detail here.
  • the network device sends the second resource configuration, and the terminal device receives the second resource configuration accordingly.
  • the terminal device processes the channel information obtained from the second resource configuration measurement based on an AI model (e.g., AI model #3) to generate a second channel report.
  • S830: the terminal device sends a second channel report and third indication information to the network device, and the network device receives the second channel report and the third indication information accordingly.
  • step S830 is similar to step S720, and will not be described in detail here.
  • the method may further include S840, in which the network device sends fourth indication information to the terminal device, and correspondingly, the terminal device receives the fourth indication information.
  • step S840 can be referenced to the relevant description of the step before step S720 above, in which the network device sends the fourth indication information to the terminal device, and will not be repeated here.
  • Figure 9 is a schematic flowchart of a communication method 900 provided in an embodiment of this application. As shown in Figure 9, the method includes at least the following steps.
  • the terminal device sends first information to the network device, and the network device receives the first information accordingly.
  • the first information includes at least one resource configuration supported by the first AI model.
  • the terminal device sends first information to the network device.
  • This first information includes at least one resource configuration supported by the first AI model on the terminal device side. That is, the terminal device reports at least one resource configuration supported by the first AI model to the network device. It should be noted that a detailed description of the at least one resource configuration supported by the first AI model reported by the terminal device can be found in the relevant description in Table 1, and will not be repeated here.
  • the first information may also include the identification information of the first AI model. That is, in this case, the first information may include the identification information of the first AI model and at least one resource configuration supported by the first AI model.
  • the specific content of the first information can be found in Table 3, and will not be elaborated here.
  • the first information may also include functional information of the first AI model. That is, in this case, the first information may include the identification information of the first AI model, the functional information of the first AI model, and at least one resource configuration supported by the first AI model.
  • the specific content of the first information can be found in Table 5, and will not be elaborated here.
  • the first information may further include at least one resource configuration supported by the second AI model, wherein the second AI model can be understood as one or more AI models different from the first AI model.
  • the terminal device sends first information to the network device.
  • This first information includes at least one resource configuration supported by a first AI model on the terminal device side, and at least one resource configuration supported by other AI models (second AI models) on the terminal device side. That is, the terminal device reports to the network device at least one resource configuration supported by the first AI model and at least one resource configuration supported by other AI models. Optionally, in some cases, the terminal device reports to the network device at least one resource configuration supported by all AI models on the terminal device side.
  • the first information may further include the identification information of the first AI model and the identification information of the second AI model. That is, in this case, the first information may include the identification information of the first AI model, at least one resource configuration supported by the first AI model, the identification information of the second AI model, and at least one resource configuration supported by the second AI model.
  • the specific content of the first information can be found in Table 4, and will not be elaborated here.
  • the first information may further include functional information of the first AI model and functional information of the second AI model. That is, in this case, the first information may include the identification information of the first AI model, the functional information of the first AI model, at least one resource configuration supported by the first AI model, the identification information of the second AI model, the functional information of the second AI model, and at least one resource configuration supported by the second AI model.
  • the specific content of the first information can be found in Table 6, and will not be elaborated here.
  • the network device sends a first resource configuration to the terminal device, and the terminal device receives the first resource configuration accordingly.
  • The network device determines at least one resource configuration supported by the AI model on the terminal device side, selects one resource configuration from that at least one resource configuration, and sends it to the terminal device. For example, the network device selects a first resource configuration to send to the terminal device, wherein the first resource configuration is one of the at least one resource configuration supported by the first AI model. In other words, the network device selects a first resource configuration from the at least one resource configuration supported by the first AI model and sends it to the terminal device.
  • the network device sends a reference signal CSI-RS to the terminal device, and the terminal device receives the reference signal CSI-RS accordingly.
  • the terminal device performs measurements based on the received first resource configuration to obtain channel information.
  • the terminal device processes the channel information based on the first AI model to obtain the first channel report.
  • The first AI model can be either an AI CSI prediction-plus-compression model or an AI CSI compression model.
  • the terminal device processes the channel information based on the first AI model, including: the terminal device compresses the channel information based on the first AI model. That is, in this example, the first channel report is generated based on the compression processing of the channel information using the first AI model.
  • the terminal device processes the channel information based on the first AI model, including: the terminal device performs prediction processing on the channel information based on the first AI model, and then further compresses the channel information obtained from the prediction processing. That is, in this example, the first channel report is generated based on the prediction processing of the channel information using the first AI model, followed by further compression processing of the channel information obtained from the prediction processing.
  • the terminal device sends a first channel report to the network device, and the network device receives the first channel report accordingly.
  • the network device decompresses the first channel report based on AI model*1 to obtain the recovered channel information.
  • AI model *1 is matched with the first AI model, or in other words, AI model *1 and the first AI model are in one-to-one correspondence.
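  • One way to picture the matching between the first AI model and AI model*1 is as the two halves of a jointly designed encoder/decoder pair. The sketch below uses a linear map and its pseudo-inverse purely as stand-ins for the neural networks; it is an assumption for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)

    # The terminal-side first AI model is pictured as an encoder E, and AI model*1
    # on the network side as the decoder D matched to that particular encoder.
    E = rng.standard_normal((16, 128))        # encoder: 128-dim CSI -> 16-dim report
    D = np.linalg.pinv(E)                     # matched decoder for this encoder

    csi = rng.standard_normal(128)
    first_channel_report = E @ csi            # produced with the first AI model
    recovered_csi = D @ first_channel_report  # approximately recovered with the matched AI model*1
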
  • the terminal device reports at least one resource configuration supported by the AI model on the terminal device side to the network device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device, and further guarantees the CSI feedback performance.
  • FIG 10 is a schematic flowchart of a communication method 1000 provided in an embodiment of this application. As shown in Figure 10, the method includes at least the following steps. It should be noted in advance that steps S1010 to S1040 are similar to steps S910 to S940 described above. For simplicity, only the differences from the embodiment shown in Figure 9 above will be described below, and will not be repeated here.
  • the terminal device predicts the channel information based on the first AI model to obtain the predicted channel information.
  • the first AI model can be either an AI CSI prediction model or an AI beam temporal prediction model.
  • the terminal device reports at least one resource configuration supported by the AI model on the terminal device side to the network device, which can guarantee or improve the prediction performance of the AI model when the network device performs different resource configurations for the terminal device.
  • Figure 11 is a schematic flowchart of a communication method 1100 provided in an embodiment of this application. As shown in Figure 11, the method includes at least the following steps.
  • the network device sends the second resource configuration to the terminal device, and the terminal device receives the second resource configuration accordingly.
  • the network device sends the configuration information of the second channel report to the terminal device, and the terminal device receives the configuration information of the second channel report accordingly.
  • the configuration information of the second channel report includes the maximum feedback overhead of the second channel report. After receiving the configuration information of the second channel report, the terminal device knows the maximum feedback overhead of the second channel report.
  • the network device sends a reference signal CSI-RS to the terminal device, and the terminal device receives the reference signal CSI-RS accordingly.
  • the terminal device performs measurements based on the second resource configuration to obtain channel information.
  • the terminal device processes the channel information based on the third AI model to obtain the second channel report.
  • after the terminal device learns the maximum feedback overhead of the second channel report, it selects a third AI model based on that maximum feedback overhead, and processes the channel information measured based on the second resource configuration with the third AI model to generate the second channel report.
  • the third AI model can be an AI CSI prediction-then-compression model or an AI CSI compression model.
  • for example, when the third AI model is an AI CSI compression model, the terminal device compresses the channel information with the third AI model to generate the second channel report.
  • for another example, when the third AI model is an AI CSI prediction-then-compression model, the terminal device first performs prediction on the channel information with the third AI model and then further compresses the predicted channel information to generate the second channel report.
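  • A hypothetical sketch of how the terminal device might pick the third AI model once it knows the maximum feedback overhead of the second channel report: each candidate model is tagged with the payload size of the report it produces, and the model that best fits within the allowed overhead is chosen. The model names and payload sizes are invented for illustration.

    # Candidate models on the terminal device side, tagged with the payload size
    # (in bits) of the channel report each one produces.
    candidate_models = {
        "csi-compression-small": {"payload_bits": 48},
        "csi-compression-large": {"payload_bits": 96},
        "csi-predict-compress":  {"payload_bits": 128},
    }

    def select_third_ai_model(max_feedback_bits):
        feasible = {name: info for name, info in candidate_models.items()
                    if info["payload_bits"] <= max_feedback_bits}
        if not feasible:
            return None
        # Prefer the model that uses the allowed feedback overhead most fully.
        return max(feasible, key=lambda name: feasible[name]["payload_bits"])

    third_ai_model = select_third_ai_model(max_feedback_bits=100)   # -> "csi-compression-large"
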
  • the terminal device sends the second channel report and third indication information to the network device, and the network device receives them accordingly.
  • the third indication information indicates that the second channel report is associated with the third AI model.
  • in other words, the terminal device sends the third indication information to the network device to indicate that the second channel report is associated with the third AI model.
  • after receiving the third indication information and the second channel report, the network device knows that the second channel report was generated based on the third AI model. This allows the network device to use the AI model corresponding to the third AI model to decompress the second channel report and obtain the recovered channel information.
  • the network device decompresses the second channel report based on AI model*2 to obtain the recovered channel information.
  • AI Model *2 is matched with the third AI model, or in other words, AI Model *2 corresponds one-to-one with the third AI model.
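  • On the network device side, the third indication information can be thought of as a key into a registry of matched decompression models. The sketch below is a hypothetical illustration; the registry entries and function names are not from the patent.

    # Registry on the network side: model identity signalled in the third
    # indication information -> matched decompression model (AI model*2).
    decoder_registry = {
        "csi-compression-small": "decoder-small",
        "csi-compression-large": "decoder-large",
        "csi-predict-compress":  "decoder-predict",
    }

    def decompress_second_report(report_bits, third_indication):
        decoder = decoder_registry[third_indication]   # pick the matched AI model*2
        # A real decoder would reconstruct the channel; here the output is only tagged.
        return {"decoded_with": decoder, "payload_bits": len(report_bits)}

    recovered = decompress_second_report([0, 1, 1, 0], "csi-compression-large")
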
  • the terminal device may have multiple AI models, and the information in the second channel report generated based on different AI models will be different. For example, specific descriptions can be found in Tables 7 and 8 above, which will not be repeated here.
  • the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device, and further guarantees the CSI feedback performance.
  • Figures 6 to 11 illustrate the scenario where the first device is the terminal device and the second device is the network device.
  • the following description, in conjunction with Figures 12 to 14, illustrates scenarios where the first device and the second device are an AI entity on the terminal device side and an AI entity on the network device side, respectively.
  • the AI entity on the terminal device side can be the terminal device itself or an AI entity serving the terminal device, such as a server, like an OTT server or a cloud server.
  • the AI entity on the network device side can be the network device itself or an AI entity serving the network device, such as RAN, RIC, OAM, or a server, like a cloud server.
  • the AI entity on the network device can also be replaced by intelligent network elements, etc., without limitation. It should be noted that intelligent network elements can also be understood as network devices with AI capabilities, applicable in both O-RAN and non-O-RAN architectures, without limitation.
  • Figure 12 is a schematic flowchart of another communication method 1200 according to an embodiment of this application.
  • in this example, the first device is an OTT and the second device is a near real-time RIC, and the method includes the following steps.
  • the OTT sends the first information to the terminal device, and the terminal device receives the first information accordingly.
  • the terminal device sends the first information to the network device, and the network device receives the first information accordingly.
  • the network device sends the first information to the near real-time RIC, and correspondingly, the near real-time RIC receives the first information.
  • the network device sends the first resource configuration to the terminal device, and the terminal device receives the first resource configuration accordingly.
  • the network device sends a reference signal CSI-RS to the terminal device, and the terminal device receives the reference signal CSI-RS accordingly.
  • the terminal device performs measurements based on the received first resource configuration to obtain channel information.
  • for descriptions of steps S1204 to S1206, please refer to steps S920 to S940 above; they will not be repeated here.
  • the terminal device sends channel information to the OTT, and the OTT receives the channel information accordingly.
  • OTT processes the channel information based on the first AI model to obtain the first channel report.
  • for a description of step S1208, please refer to the description of step S950 above, which will not be repeated here.
  • the OTT sends a first channel report to the terminal device, and the terminal device receives the first channel report accordingly.
  • the terminal device sends a first channel report to the network device, and the network device receives the first channel report accordingly.
  • the network device sends the first channel report to the near real-time RIC, and correspondingly, the near real-time RIC receives the first channel report.
  • the near real-time RIC decompresses the first channel report based on AI model*1 to obtain the recovered channel information.
  • AI model *1 is matched with the first AI model, or in other words, AI model *1 and the first AI model are in one-to-one correspondence.
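  • The relay in method 1200 can be summarized with a small end-to-end sketch: the terminal device forwards the measured channel information to the OTT, the OTT applies the first AI model and returns the first channel report, and the report reaches the near real-time RIC, which recovers the channel information with the matched AI model*1. The placeholder functions below are assumptions used only for illustration.

    def ott_process(channel_info):
        # Stands in for the first AI model run at the OTT: channel information -> report.
        return {"report_of": channel_info}

    def ric_decompress(first_channel_report):
        # Stands in for the matched AI model*1 at the near real-time RIC.
        return first_channel_report["report_of"]

    channel_info = "measured channel information"
    first_channel_report = ott_process(channel_info)   # terminal device -> OTT -> report
    recovered = ric_decompress(first_channel_report)   # report -> network device -> RIC -> recovered CSI
    assert recovered == channel_info
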
  • FIG 13 is a schematic flowchart of another communication method 1300 according to an embodiment of this application.
  • the first device is an OTT and the second device is a near real-time RIC.
  • the method includes the following steps. It should be noted in advance that steps S1301 to S1307 are similar to steps S1201 to S1207 described above. For simplicity, only the differences from the embodiment shown in Figure 12 above will be described below, and will not be repeated here.
  • OTT predicts channel information based on the first AI model to obtain the predicted channel information.
  • the first AI model can be either an AI CSI prediction model or an AI beam temporal prediction model.
  • FIG 14 is a schematic flowchart of another communication method 1400 according to an embodiment of this application.
  • in this example, the first device is an OTT and the second device is a near real-time RIC, and the method includes the following steps.
  • the network device sends the second resource configuration to the terminal device, and the terminal device receives the second resource configuration accordingly.
  • the network device sends the configuration information of the second channel report to the terminal device, and the terminal device receives the configuration information of the second channel report accordingly.
  • for a description of step S1402, please refer to the description of step S1120 above, which will not be repeated here.
  • the network device sends a reference signal CSI-RS to the terminal device, and the terminal device receives the reference signal CSI-RS accordingly.
  • the terminal device performs measurements based on the second resource configuration to obtain channel information.
  • the terminal device sends channel information to the OTT, and the OTT receives the channel information accordingly.
  • OTT processes the channel information based on the third AI model to obtain the second channel report.
  • for a description of step S1406, please refer to the description of step S1150 above, which will not be repeated here.
  • the OTT sends the second channel report and third indication information to the terminal device, and the terminal device receives them accordingly.
  • the terminal device sends the second channel report and third indication information to the network device, and the network device receives them accordingly.
  • for descriptions of steps S1407 and S1408, please refer to the description of step S1160 above; they will not be repeated here.
  • the network device sends the second channel report and third indication information to the near real-time RIC, and the near real-time RIC receives them accordingly.
  • the near real-time RIC decompresses the second channel report based on AI model*2 to obtain the recovered channel information.
  • AI Model *2 is matched with the third AI Model, or in other words, AI Model *2 corresponds one-to-one with the third AI Model.
  • both the first device and the second device may include hardware structures and/or software modules, implementing the aforementioned functions in the form of hardware structures, software modules, or a combination of hardware structures and software modules. Whether a particular function is executed in the form of hardware structures, software modules, or a combination of hardware structures and software modules depends on the specific application and design constraints of the technical solution.
  • FIG. 15 is a schematic block diagram of a communication device according to an embodiment of this application.
  • the communication device includes a processing circuit 1510 and a transceiver circuit 1520, which can be interconnected or coupled to each other, for example, through a bus 1530.
  • the communication device can be a first device or a second device.
  • the communication device may also include a memory 1540.
  • the memory 1540 includes, but is not limited to, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM), or compact disc read-only memory (CD-ROM), and is used to store related instructions and data.
  • the processing circuit 1510 may be all or part of the processing or control circuitry of one or more processors, or it may be one or more processors.
  • the processor may be a central processing unit (CPU). If the processing circuit 1510 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.
  • the processing circuit 1510 may be a signal processor, a chip, or other integrated circuit that can implement the method of this application, or a portion of the aforementioned processor, chip, or integrated circuit used for processing functions.
  • the transceiver circuit 1520 can also be a transceiver, or an input/output interface.
  • An input/output interface is used for the input or output of signals or data and can also be called an input/output circuit.
  • the transceiver circuit 1520 is configured to perform the following operation: receive first indication information, where the first indication information indicates a first resource configuration, and the first resource configuration is one of at least one resource configuration supported by a first artificial intelligence (AI) model.
  • the processing circuit 1510 is configured to determine a first channel report, where the first channel report is determined by processing, with the first AI model, the channel information obtained from measurement based on the first resource configuration.
  • the processing circuit 1510 is configured to perform the following operations: determine a first resource configuration, which is one of at least one resource configuration supported by a first AI model, the first AI model being configured to process channel information measured based on the first resource configuration.
  • the transceiver circuit 1520 is configured to perform the following operation: send first indication information, where the first indication information indicates the first resource configuration.
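  • A hypothetical software analogue of this division of labor, in which a transceiver part extracts the first resource configuration from the first indication information and a processing part measures with that configuration and runs the first AI model to determine the first channel report; all names and the lambda stand-ins are assumptions.

    class CommunicationApparatus:
        def __init__(self, first_ai_model):
            self.first_ai_model = first_ai_model    # callable: channel information -> report

        def receive_first_indication(self, indication):
            # Transceiver role: extract the indicated first resource configuration.
            return indication["first_resource_config"]

        def determine_first_channel_report(self, resource_config, measure):
            # Processing role: measure with the configuration, then run the first AI model.
            channel_info = measure(resource_config)
            return self.first_ai_model(channel_info)

    apparatus = CommunicationApparatus(first_ai_model=lambda h: h[:8])   # toy "compression"
    cfg = apparatus.receive_first_indication({"first_resource_config": "periodic-5-4"})
    report = apparatus.determine_first_channel_report(cfg, measure=lambda c: list(range(32)))
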
  • when the communication device is the first device or the second device, it is responsible for executing the methods or steps related to the first device or the second device in the foregoing method embodiments.
  • the transceiver circuit 1520 can be a transceiver or an interface circuit.
  • the transceiver circuit 1520 can be an input/output circuit.
  • FIG 16 is another schematic block diagram of a communication device according to an embodiment of this application.
  • This communication device can be a first device or a second device, used to implement the methods involved in the above embodiments.
  • the communication device includes a transceiver unit 1610 and a processing unit 1620.
  • the transceiver unit 1610 may include a sending unit and a receiving unit.
  • the sending unit is used to perform the sending action of the communication device, and the receiving unit is used to perform the receiving action of the communication device.
  • the sending unit and the receiving unit are combined into one transceiver unit in this embodiment. This will be explained uniformly here and will not be repeated later.
  • when the communication device is the first device, the transceiver unit 1610 is used to receive first indication information and the like, and the processing unit 1620 is used to perform the processing, control, and other steps involved on the first device side; for example, the processing unit 1620 is used to determine a first channel report.
  • when the communication device is the second device, the transceiver unit 1610 is used to send first indication information and the like, and the processing unit 1620 is used to perform the processing, control, and other steps involved on the second device side; for example, the processing unit 1620 is used to determine a first resource configuration.
  • when the communication device is the first device or the second device, it is responsible for executing one or more of the methods or steps related to the first device or the second device in the foregoing method embodiments.
  • the communication device further includes a storage unit 1630 for storing programs or code for performing the aforementioned methods.
  • the transceiver unit in Figure 16 can correspond to the transceiver circuit in Figure 15, and the processing unit in Figure 16 can correspond to the processing circuit in Figure 15.
  • This application also provides a chip, including a processor, for calling and executing instructions stored in a memory, causing a communication device on which the chip is installed to perform the methods described in the examples above.
  • the memory may be integrated within the chip or located externally.
  • This application also provides another chip, including: an input interface, an output interface, and a processing circuit.
  • the input interface, the output interface, and the processing circuit are connected via an internal connection path.
  • the processing circuit is used to execute code in a memory. When the code is executed, the processing circuit is used to execute the methods in the examples described above.
  • the chip also includes a memory for storing computer programs or code.
  • the input interface and the output interface can be independent of each other, or they can be integrated into a single input/output interface.
  • the processing circuitry can be all or part of the processing circuitry in one or more processors, or one or more processors.
  • This application also provides a processor for coupling with a memory for performing the methods and functions involving the first or second apparatus in any of the above embodiments.
  • a computer program product containing instructions is provided, which, when run on a computer, enables the implementation of the methods described in the foregoing embodiments.
  • This application also provides a computer program that, when run on a computer, enables the implementation of the methods described in the foregoing embodiments.
  • a computer-readable storage medium which stores a computer program that, when executed by a computer, implements the methods described in the foregoing embodiments.
  • the processor can be a central processing unit (CPU), but it can also be other general-purpose processors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc.
  • the general-purpose processor can be a microprocessor or any conventional processor.
  • the memory in the embodiments of this application can be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory can be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), or flash memory.
  • the volatile memory can be random access memory (RAM), which is used as an external cache.
  • many forms of RAM are available, for example: static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced synchronous DRAM (ESDRAM), synchronous linked DRAM (SLDRAM), and direct rambus RAM (DR RAM).
  • the above embodiments can be implemented, in whole or in part, by software, hardware, firmware, or any other combination thereof.
  • the above embodiments can be implemented, in whole or in part, as a computer program product.
  • a computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, all or part of the processes or functions described in the embodiments of this application are generated.
  • the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer instructions can be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
  • the computer instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center via wired or wireless (e.g., infrared, wireless, microwave, etc.) means.
  • the computer-readable storage medium can be any available medium that a computer can access or a data storage device such as a server or data center that includes one or more sets of available media.
  • the available medium can be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium.
  • a semiconductor medium can be a solid-state drive.
  • the disclosed systems, devices, and methods can be implemented in other ways.
  • the device embodiments described above are merely illustrative; for example, the division of units is merely a logical functional division, and in actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces; the indirect coupling or communication connection of devices or units may be electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separate.
  • the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units can be selected to achieve the purpose of this embodiment according to actual needs.
  • the functional units in the various embodiments of this application can be integrated into one processing unit, or each unit can exist physically separately, or two or more units can be integrated into one unit. If the above functions are implemented as software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence, or the part that contributes to the prior art, or part of the technical solution, can be embodied in the form of a software product.
  • This computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, server, or network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage medium includes various media capable of storing program code, such as USB flash drives, portable hard drives, read-only memory, random access memory, magnetic disks, or optical disks.

Landscapes

  • Mobile Radio Communication Systems (AREA)

Abstract

The present application provides a communication method, the method comprising: a terminal device receiving first indication information sent by a network device, wherein the first indication information indicates a first resource configuration, the first resource configuration is one of at least one resource configuration supported by a first artificial intelligence (AI) model, and the first AI model is configured to process channel information obtained from measurement based on the first resource configuration. The described technical solution ensures or improves the performance of the first AI model when the network device performs different resource configurations for the terminal device.

Description

A communication method and a communication device

This application claims priority to Chinese Patent Application No. 202410579828.0, filed with the Chinese Patent Office on May 10, 2024, and entitled "A Communication Method and Communication Device", the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of communication technology, and more specifically, to a communication method and a communication device.

Background

In a communication system, a network device determines, based on downlink channel state information (CSI), configuration information related to the downlink channel, such as the resources of the downlink data channel used to schedule a terminal device, the modulation and coding scheme (MCS), and precoding. The terminal device calculates the downlink CSI by measuring a downlink reference signal and generates a CSI report, which is fed back to the network device.

The introduction of artificial intelligence (AI) into wireless communication has led to the emergence of an AI-based CSI feedback mechanism. AI models possess stronger feature extraction capabilities, enabling them to process channel information more effectively, for example, to predict and/or compress the channel information.

Currently, the configuration of reference signals sent by network devices to terminal devices is flexible and variable, which may cause performance fluctuations of AI models when processing channel information. For example, if the AI model on the terminal device side is an AI CSI compression model, this may lead to performance fluctuations in CSI feedback; if the AI model on the terminal device side is an AI CSI prediction model, this may lead to performance fluctuations in CSI prediction.

Summary of the Invention

This application provides a communication method that can guarantee or improve the performance of an AI model when a network device performs different resource configurations for a terminal device.

In a first aspect, a communication method is provided. The method can be executed by a first device, and the first device can be a device on the first AI model side, or a chip or circuit of the device on the first AI model side. The device on the first AI model side can be replaced by a device on the terminal device side, and the terminal device side can include at least one of a terminal device or an AI entity on the terminal device side. The AI entity on the terminal device side can be the terminal device itself, or an AI entity serving the terminal device, for example, a server such as an over-the-top (OTT) server or a cloud server.

The method includes: receiving first indication information, where the first indication information indicates a first resource configuration, and the first resource configuration is one of at least one resource configuration supported by a first artificial intelligence (AI) model, where the first AI model is used to process channel information measured based on the first resource configuration.

A resource configuration supported by the first AI model can be understood as a resource configuration that does not exceed the capability range of the first AI model; for example, the resource configuration may be a CSI-RS configuration. It should be noted that, in the embodiments of this application, the terminal device can determine the resource configurations supported by an AI model by consulting the technical documentation or related descriptions of the AI model.

In the technical solution of this application, the terminal device receives first indication information sent by the network device to indicate a first resource configuration, where the first resource configuration is a resource configuration supported by the first AI model on the terminal device side. This can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, the resource configuration includes: a configuration type, an offset between adjacent resources, or a number of transmissions, where the configuration type includes at least one of the following: periodic configuration, semi-persistent configuration, or aperiodic configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, where the first AI model is used to process the channel information measured based on the first resource configuration, the method further includes: determining a first channel report, where the first channel report is determined by processing, based on the first AI model, the channel information measured based on the first resource configuration. Based on the above technical solution, the terminal device processes the channel information measured based on the first resource configuration by using the first AI model and generates the first channel report, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

It should be understood that the first channel report can be replaced by a first CSI report, first CSI feedback information, first CSI compression information, or the like. This application does not limit this.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: sending first information, where the first information is associated with the at least one resource configuration. Based on the above technical solution, the terminal device sends the first information to the network device, the first information is associated with the at least one resource configuration supported by the first AI model, and the network device selects the first resource configuration from the at least one resource configuration and delivers it to the terminal device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, the first information includes the at least one resource configuration. Based on the above technical solution, the terminal device reports to the network device the at least one resource configuration supported by the first AI model, and the network device selects the first resource configuration from the at least one resource configuration and delivers it to the terminal device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

Optionally, the first information may further include at least one resource configuration supported by a second AI model, where the second AI model can be understood as one or more AI models different from the first AI model.

With reference to the first aspect, in some implementations of the first aspect, the first information includes identification information of the first AI model. Based on the above technical solution, the terminal device sends the identification information of the first AI model to the network device, and the network device determines, based on the identification information of the first AI model, the at least one resource configuration supported by the first AI model, selects the first resource configuration from the at least one resource configuration, and delivers it to the terminal device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: sending second indication information, where the second indication information indicates a correspondence between the identification information of the first AI model and the at least one resource configuration. Based on the above technical solution, the network device determines, according to the identification information of the first AI model and the second indication information, the at least one resource configuration supported by the first AI model, selects the first resource configuration from the at least one resource configuration, and delivers it to the terminal device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, the second indication information further indicates a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, where the second AI model is different from the first AI model. Based on the above technical solution, the network device determines, according to the identification information of the first AI model and the second indication information, the at least one resource configuration supported by the first AI model, selects the first resource configuration from the at least one resource configuration, and delivers it to the terminal device, which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

It should be understood that the second AI model can be one or more AI models different from the first AI model.

With reference to the first aspect, in some implementations of the first aspect, the sending of the second indication information includes: sending second information, where the second information includes the second indication information, and the second information includes any one of the following: model-related information, registration request information, or capability information of the terminal device. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device.

With reference to the first aspect, in some implementations of the first aspect, the first information includes functional information of the first AI model, and the functional information includes any one of the following:

time-domain channel state information (CSI) prediction-then-compression function information,

spatial-domain and frequency-domain CSI compression function information,

spatial-domain, frequency-domain, and time-domain CSI compression function information,

time-domain CSI prediction function information, or

beam time-domain prediction function information.

Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device.
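The functional information enumerated above can be pictured as a small set of labels. The following sketch is only an illustrative naming, assuming an enumeration of the five functions; it does not represent any signalling format defined in this application.

    from enum import Enum

    class AiCsiFunction(Enum):
        TIME_DOMAIN_PREDICTION_THEN_COMPRESSION = 1   # time-domain CSI prediction followed by compression
        SPATIAL_FREQUENCY_COMPRESSION = 2             # spatial- and frequency-domain CSI compression
        SPATIAL_FREQUENCY_TIME_COMPRESSION = 3        # spatial-, frequency- and time-domain CSI compression
        TIME_DOMAIN_PREDICTION = 4                    # time-domain CSI prediction
        BEAM_TIME_DOMAIN_PREDICTION = 5               # beam time-domain prediction

    first_model_function = AiCsiFunction.TIME_DOMAIN_PREDICTION_THEN_COMPRESSION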

With reference to the first aspect, some implementations of the first aspect include: sending a second channel report and third indication information, where the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information measured based on a second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device.

Optionally, the third indication information may be identification information of the third AI model.

Optionally, the third AI model may be the aforementioned first AI model, the second resource configuration may also be the aforementioned first resource configuration, and correspondingly, the second channel report may also be the aforementioned first channel report. In this case, the terminal device sends the second channel report and the third indication information to the network device, so that the network device uses the AI model corresponding to the third AI model to decompress the second channel report to obtain the recovered channel information.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: receiving fourth indication information, where the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report includes the maximum feedback overhead of the second channel report. Based on the above technical solution, the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model (for example, the third AI model), which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device, and further guarantees the performance of CSI feedback.

With reference to the first aspect, in some implementations of the first aspect, the method further includes: determining the information of the second channel report according to the second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device, and the performance of CSI feedback is further guaranteed.

With reference to the first aspect, in some implementations of the first aspect, the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to the one or more measurement resources, the time-domain information corresponding to the one or more measurement resources, the frequency-domain information corresponding to the one or more measurement resources, the spatial-domain information corresponding to the one or more measurement resources, or the number of bits corresponding to each piece of resource information in the one or more measurement resources. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device, and the performance of CSI feedback is further guaranteed.
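The items listed above can be pictured as fields of a simple container. The following sketch is a hypothetical illustration; the field names and example values are assumptions and do not represent the actual encoding of the second channel report.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class SecondChannelReportInfo:
        num_resources: int                    # number of measured resources
        measurement_results: List[float]      # per-resource measurement results
        time_domain_info: List[int]           # e.g. slot indices of the resources
        frequency_domain_info: List[int]      # e.g. subband indices
        spatial_domain_info: List[int]        # e.g. port or beam indices
        bits_per_resource: List[int]          # feedback bits used per resource

    info = SecondChannelReportInfo(
        num_resources=2,
        measurement_results=[-3.2, -5.1],
        time_domain_info=[10, 14],
        frequency_domain_info=[0, 1],
        spatial_domain_info=[4, 7],
        bits_per_resource=[12, 12])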

In a second aspect, a communication method is provided. The method can be executed by a first device, and the first device can be a device on the first AI model side, or a chip or circuit of the device on the first AI model side. The device on the first AI model side can be replaced by a device on the terminal device side, and the terminal device side can include at least one of a terminal device or an AI entity on the terminal device side. The AI entity on the terminal device side can be the terminal device itself, or an AI entity serving the terminal device, for example, a server such as an over-the-top (OTT) server or a cloud server.

The method includes: sending a second channel report and third indication information, where the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information measured based on a second resource configuration.

In the technical solution of this application, the terminal device matches, in real time, the configuration information of the channel report sent by the network device, selects the most suitable AI model (for example, the third AI model), and sends the third indication information to the network device, so that the network device uses the AI model corresponding to the third AI model to decompress the second channel report to obtain the recovered channel information. This can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device, and further guarantees the performance of CSI feedback.

With reference to the second aspect, in some implementations of the second aspect, the method further includes: receiving fourth indication information, where the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report includes the maximum feedback overhead of the second channel report. Based on the above technical solution, the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model (for example, the third AI model), which can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device, and further guarantees the performance of CSI feedback.

With reference to the second aspect, in some implementations of the second aspect, the method further includes: determining the information of the second channel report according to the second resource configuration. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device, and the performance of CSI feedback is further guaranteed.

With reference to the second aspect, in some implementations of the second aspect, the information of the second channel report includes at least one of the following: the number of resources corresponding to one or more measurement resources, the measurement results corresponding to the one or more measurement resources, the time-domain information corresponding to the one or more measurement resources, the frequency-domain information corresponding to the one or more measurement resources, the spatial-domain information corresponding to the one or more measurement resources, or the number of bits corresponding to each piece of resource information in the one or more measurement resources. Based on the above technical solution, the performance of the AI model can be guaranteed or improved when the network device performs different resource configurations for the terminal device, and the performance of CSI feedback is further guaranteed.

In a third aspect, a communication method is provided. The method can be executed by a second device, and the second device may include at least one of a network device or an AI entity on the network device side. The AI entity on the network device side can be the network device itself, or an AI entity serving the network device (which can also be replaced by an intelligent network element or the like; this is not limited), for example, a radio access network (RAN), a RAN intelligent controller (RIC), operation administration and maintenance (OAM), or a server, such as a cloud server. An intelligent network element can also be understood as a network device with an AI function, which can be applied in an O-RAN architecture or in a non-O-RAN architecture; this is not limited.

The method includes: determining a first resource configuration, where the first resource configuration is one of at least one resource configuration supported by a first AI model, and the first AI model is used to process channel information measured based on the first resource configuration; and sending first indication information, where the first indication information indicates the first resource configuration.

It should be understood that, for the beneficial effects of the third aspect and any implementation thereof, reference may be made to the beneficial effects of the first aspect and any implementation thereof.

With reference to the third aspect, in some implementations of the third aspect, the resource configuration includes: a configuration type, an offset between adjacent resources, or a number of transmissions, where the configuration type includes at least one of the following: periodic configuration, semi-persistent configuration, or aperiodic configuration.

With reference to the third aspect, in some implementations of the third aspect, the method further includes: obtaining first information, where the first information is associated with the at least one resource configuration.

With reference to the third aspect, in some implementations of the third aspect, the first information includes the at least one resource configuration.

With reference to the third aspect, in some implementations of the third aspect, the first information includes identification information of the first AI model.

With reference to the third aspect, in some implementations of the third aspect, the method further includes: obtaining second indication information, where the second indication information indicates a correspondence between the identification information of the first AI model and the at least one resource configuration.

With reference to the third aspect, in some implementations of the third aspect, the second indication information further indicates a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, where the second AI model is different from the first AI model.

With reference to the third aspect, in some implementations of the third aspect, the receiving of the second indication information includes: obtaining second information, where the second information includes the second indication information, and the second information includes any one of the following: model-related information, registration request information, or capability information of the terminal device.

With reference to the third aspect, in some implementations of the third aspect, the first information includes functional information of the first AI model, and the functional information includes any one of the following:

time-domain CSI prediction-then-compression function information,

spatial-domain and frequency-domain CSI compression function information,

spatial-domain, frequency-domain, and time-domain CSI compression function information,

time-domain CSI prediction function information, or

beam time-domain prediction function information.

With reference to the third aspect, some implementations of the third aspect include: obtaining a second channel report and third indication information, where the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information measured based on a second resource configuration.

With reference to the third aspect, in some implementations of the third aspect, the method further includes: sending fourth indication information, where the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report includes the maximum feedback overhead of the second channel report.

With reference to the third aspect, in some implementations of the third aspect, the information of the second channel report includes at least one of the following:

the number of resources corresponding to one or more measurement resources,

the measurement results corresponding to the one or more measurement resources,

the time-domain information corresponding to the one or more measurement resources,

the frequency-domain information corresponding to the one or more measurement resources,

the spatial-domain information corresponding to the one or more measurement resources, or

the number of bits corresponding to each piece of resource information in the one or more measurement resources.

In a fourth aspect, a communication method is provided. The method may be performed by a second device, and the second device may include at least one of a network device or an AI entity on the network device side. The AI entity on the network device side may be the network device itself, or may be an AI entity serving the network device, for example, a radio access network (RAN), a RAN intelligent controller (RIC), or operation, administration and maintenance (OAM).

The method includes: obtaining a second channel report and third indication information, where the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained through measurement on a second resource configuration.

It should be understood that for beneficial effects of the fourth aspect and any implementation thereof, refer to the beneficial effects of the second aspect and any implementation thereof.

In conjunction with the fourth aspect, in some implementations of the fourth aspect, the method further includes: sending fourth indication information, where the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report includes a maximum feedback overhead of the second channel report.

In conjunction with the fourth aspect, in some implementations of the fourth aspect, the information of the second channel report includes at least one of the following (an illustrative structural sketch of these items is given after the list):

a quantity of resources corresponding to one or more measurement resources;

a measurement result corresponding to the one or more measurement resources;

time domain information corresponding to the one or more measurement resources;

frequency domain information corresponding to the one or more measurement resources;

spatial domain information corresponding to the one or more measurement resources; or

a quantity of bits corresponding to each piece of resource information in the one or more measurement resources.
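To make the composition of the second channel report more concrete, the following is a minimal illustrative sketch in Python. The class and field names (for example, ChannelReportInfo, num_resources) are hypothetical, are not defined in this application, and simply mirror the items listed above under stated placeholder assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChannelReportInfo:
    """Illustrative container for the information a second channel report may carry.

    Each field corresponds to one optional item in the list above; an actual
    report would carry at least one of them.
    """
    num_resources: Optional[int] = None                               # quantity of measurement resources
    measurement_results: List[float] = field(default_factory=list)    # e.g., per-resource measurement values
    time_domain_info: List[int] = field(default_factory=list)         # e.g., slot/symbol indices
    freq_domain_info: List[int] = field(default_factory=list)         # e.g., resource block or subband indices
    spatial_domain_info: List[int] = field(default_factory=list)      # e.g., port or beam indices
    bits_per_resource: List[int] = field(default_factory=list)        # bits used for each resource's information

# Example: a report covering two measurement resources (all values are placeholders).
report = ChannelReportInfo(
    num_resources=2,
    measurement_results=[-85.3, -90.1],
    time_domain_info=[0, 4],
    freq_domain_info=[12, 36],
    spatial_domain_info=[1, 3],
    bits_per_resource=[16, 16],
)
```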

In a fifth aspect, a communication device is provided. The communication device may be the first device, or may be a device, a module, or the like configured to perform the functions of the first device.

In a possible implementation, the communication device may include modules or units in one-to-one correspondence with the methods/operations/steps/actions described in the first aspect. The modules or units may be implemented by a hardware circuit, by software, or by a combination of a hardware circuit and software.

The foregoing first device may be a terminal device or an AI entity on the terminal device side, which is not limited herein.

In a sixth aspect, a communication device is provided. The communication device may be the second device, or may be a device, a module, or the like configured to perform the functions of the second device.

In a possible implementation, the communication device may include modules or units in one-to-one correspondence with the methods/operations/steps/actions described in the second aspect. The modules or units may be implemented by a hardware circuit, by software, or by a combination of a hardware circuit and software.

The foregoing second device may be a network device or an AI entity on the network device side.

In a seventh aspect, a communication device is provided, including a processor. The processor is configured to, by executing a computer program or instructions, or by using a logic circuit, enable the communication device to perform the method according to the first aspect and any possible implementation of the first aspect; or enable the communication device to perform the method according to the second aspect and any possible implementation of the second aspect; or enable the communication device to perform the method according to the third aspect and any possible implementation of the third aspect; or enable the communication device to perform the method according to the fourth aspect and any possible implementation of the fourth aspect.

In a possible implementation, the communication device further includes a memory configured to store the computer program or the instructions.

In a possible implementation, the communication device further includes a communication interface configured to input and/or output signals.

In an eighth aspect, a communication device is provided, including a logic circuit and an input/output interface. The input/output interface is configured to input and/or output signals, and the logic circuit is configured to perform the method according to the first aspect and any possible implementation of the first aspect; or the logic circuit is configured to perform the method according to the second aspect and any possible implementation of the second aspect; or the logic circuit is configured to perform the method according to the third aspect and any possible implementation of the third aspect; or the logic circuit is configured to perform the method according to the fourth aspect and any possible implementation of the fourth aspect.

In a ninth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a computer program or instructions. When the computer program or the instructions are run on a computer, the method according to the first aspect and any possible implementation of the first aspect is performed; or the method according to the second aspect and any possible implementation of the second aspect is performed; or the method according to the third aspect and any possible implementation of the third aspect is performed; or the method according to the fourth aspect and any possible implementation of the fourth aspect is performed.

In a tenth aspect, a computer program product is provided, including instructions. When the instructions are run on a computer, the method according to the first aspect and any possible implementation of the first aspect is performed; or the method according to the second aspect and any possible implementation of the second aspect is performed; or the method according to the third aspect and any possible implementation of the third aspect is performed; or the method according to the fourth aspect and any possible implementation of the fourth aspect is performed.

In an eleventh aspect, a chip or a chip system is provided, including one or more processors. The processor is configured to execute a computer program or instructions in a memory, so that the chip or the chip system implements the method according to the first aspect and any possible implementation of the first aspect; or implements the method according to the second aspect and any possible implementation of the second aspect; or implements the method according to the third aspect and any possible implementation of the third aspect; or implements the method according to the fourth aspect and any possible implementation of the fourth aspect.

For descriptions of the beneficial effects of any one of the fifth aspect to the eleventh aspect, refer to the descriptions of the beneficial effects of the first aspect to the fourth aspect. Details are not described herein again.

Brief Description of Drawings

Figure 1 is a schematic diagram of an application framework applicable to an embodiment of this application.

Figure 2 is a schematic diagram of another application framework applicable to an embodiment of this application.

Figure 3 is a schematic diagram of a communication system applicable to an embodiment of this application.

Figure 4 is a schematic diagram of another communication system applicable to an embodiment of this application.

Figure 5 is a schematic diagram of a relationship between an encoder and a decoder.

Figure 6 is a schematic flowchart of a communication method 600 according to an embodiment of this application.

Figure 7 is a schematic flowchart of a communication method 700 according to an embodiment of this application.

Figure 8 is a schematic flowchart of a communication method 800 according to an embodiment of this application.

Figure 9 is a schematic flowchart of a communication method 900 according to an embodiment of this application.

Figure 10 is a schematic flowchart of a communication method 1000 according to an embodiment of this application.

Figure 11 is a schematic flowchart of a communication method 1100 according to an embodiment of this application.

Figure 12 is a schematic flowchart of still another communication method 1200 according to an embodiment of this application.

Figure 13 is a schematic flowchart of still another communication method 1300 according to an embodiment of this application.

Figure 14 is a schematic flowchart of still another communication method 1400 according to an embodiment of this application.

Figure 15 is a schematic block diagram of a communication device according to an embodiment of this application.

Figure 16 is another schematic block diagram of a communication device according to an embodiment of this application.

Description of Embodiments

The following describes the technical solutions in this application with reference to the accompanying drawings.

To facilitate understanding of the embodiments of this application, the following points are described first.

1. In this application, unless otherwise stated, "multiple" means two or more.

2. In the embodiments of this application, unless otherwise specified or unless there is a logical conflict, terms and/or descriptions in different embodiments are consistent and may be mutually referenced, and technical features in different embodiments may be combined, based on their inherent logical relationships, to form new embodiments.

3. The numbers used in this application are merely for ease of description and are not intended to limit the protection scope of this application. The magnitude of the sequence numbers does not imply an execution order; the execution order of each process should be determined by its function and internal logic. For example, the terms "first", "second", "third", "fourth", and other such labels (if any) in the specification, claims, and accompanying drawings of this application are used to distinguish between similar objects, and are not necessarily used to describe a specific order or sequence. Data used in this way may be interchanged where appropriate, so that the embodiments described herein can be implemented in an order other than the order illustrated or described herein.

In addition, any embodiment or design described as "exemplary" or "for example" in this application should not be construed as being preferred over or more advantageous than other embodiments or designs. Rather, words such as "exemplary" or "for example" are intended to present a related concept in a specific manner for ease of understanding.

4. The terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the expressly listed steps or units, but may include other steps or units that are not expressly listed or that are inherent to the process, method, product, or device.

5. In this application, "used to indicate" may be understood as "enabling", and "enabling" may include direct enabling and indirect enabling. When it is described that a piece of information is used to enable A, the information may directly enable A or indirectly enable A, and this does not necessarily mean that the information carries A.

The information enabled by a piece of information is referred to as to-be-enabled information. In a specific implementation process, there are many manners of enabling the to-be-enabled information, for example, but not limited to, directly enabling the to-be-enabled information, such as the to-be-enabled information itself or an index of the to-be-enabled information. Alternatively, the to-be-enabled information may be indirectly enabled by enabling other information, where there is an association between the other information and the to-be-enabled information. Alternatively, only a part of the to-be-enabled information may be enabled, while other parts of the to-be-enabled information are known or agreed upon in advance. For example, specific information may also be enabled by using an arrangement order of pieces of information that is agreed upon in advance (for example, specified in a protocol), to reduce enabling overhead to some extent. In addition, common parts of pieces of information may be identified and enabled in a unified manner, to reduce the enabling overhead caused by separately enabling the same information.

6. In this application, "preconfiguration" may include predefinition, for example, definition in a protocol. "Predefinition" may be implemented by pre-storing, in a device (for example, including each network element), corresponding code, a table, or another manner that can be used to indicate related information. A specific implementation thereof is not limited in this application.

7. "Storing" or "saving" in this application may refer to storing in one or more memories. The one or more memories may be separately disposed, or may be integrated into an encoder, a decoder, a processor, or a communication device. Alternatively, a part of the one or more memories may be separately disposed, and a part may be integrated into the decoder, the processor, or the communication device. The type of the memory may be a storage medium in any form, which is not limited herein.

8. The "protocol" in this application may refer to a standard protocol in the communication field, and may include, for example, a fourth generation (4G) network protocol, a fifth generation (5G) network protocol, a new radio (NR) protocol, a 5.5G network protocol, a future network protocol, or a related protocol applied to a future communication system. This is not limited in this application.

9. Arrows or boxes shown by dashed lines in the schematic diagrams in the accompanying drawings of this application represent optional steps or optional modules.

10. In this application, unless otherwise stated, "/" indicates an "or" relationship between the associated objects. For example, A/B may represent A or B. "And/or" in this application merely describes an association relationship between associated objects, and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: only A exists, both A and B exist, and only B exists, where A and B may be singular or plural.

11. In this application, an indication includes a direct indication (also referred to as an explicit indication) and an implicit indication. Directly indicating information A means including the information A. Implicitly indicating information A means indicating the information A by using a correspondence between the information A and information B and directly indicating the information B. The correspondence between the information A and the information B may be predefined, pre-stored, pre-burned, or preconfigured.

12. In this application, that information C is used to determine information D includes both the case in which the information D is determined based only on the information C and the case in which the information D is determined based on the information C and other information. In addition, that the information C is used to determine the information D may also cover an indirect determination, for example, the case in which the information D is determined based on information E and the information E is determined based on the information C.

13. In this application, "device A sends information A to device B" may be understood as meaning that the destination of the information A, or an intermediate network element on the transmission path to the destination, is device B, and may include directly or indirectly sending the information to device B.

14. In this application, "device B receives information A from device A" may be understood as meaning that the source of the information A, or an intermediate network element on the transmission path to the source, is device A, and may include directly or indirectly receiving the information from device A. The information may undergo necessary processing, such as a format change, between the source and the destination, but the destination can understand the valid information from the source. Similar expressions in this application may be understood in a similar manner, and details are not described herein again.

The following first describes a communication system to which the embodiments of this application are applicable.

The technical solutions provided in this application may be applied to various communication systems, for example, a 5G or NR system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a wireless local area network (WLAN) system, a satellite communication system, a future communication system such as a future mobile communication system, or a system integrating multiple systems. The technical solutions provided in this application may further be applied to device-to-device (D2D) communication, vehicle-to-everything (V2X) communication, machine-to-machine (M2M) communication, machine type communication (MTC), an internet of things (IoT) communication system, or another communication system.

One device in a communication system may send a signal to or receive a signal from another device. The signal may include information, signaling, data, or the like. A device may also be replaced with an entity, a network entity, equipment, a communication device, a communication module, a node, a communication node, or the like; in this application, a device is used as an example for description. For example, the communication system may include at least one terminal device and at least one network device. The network device may send a downlink signal to the terminal device, and/or the terminal device may send an uplink signal to the network device, and/or the terminal device may send a sidelink signal to another terminal device, and/or the network device may send a signal to another network device.

In the embodiments of this application, the terminal device may also be referred to as user equipment (UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile terminal, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent, or a user apparatus.

The terminal device may be a device that provides voice/data, for example, a handheld device or a vehicle-mounted device having a wireless connection function. Currently, some examples of terminals are: a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a mobile internet device (MID), a wearable device, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device having a wireless communication function, a computing device or another processing device connected to a wireless modem, a wearable device, a terminal device in a 5G network, or a terminal device in a future evolved public land mobile network (PLMN). This is not limited in the embodiments of this application.

As an example rather than a limitation, the terminal device may alternatively be a wearable device. A wearable device, also referred to as a wearable intelligent device, is a general term for devices, such as glasses, gloves, watches, clothing, and shoes, that are developed by applying wearable technologies to intelligent designs of daily wear. A wearable device is a portable device that is worn directly on the body or integrated into the clothes or an accessory of a user. A wearable device is not merely a hardware device, but also implements powerful functions through software support, data interaction, and cloud interaction. In a broad sense, wearable intelligent devices include full-featured, large-sized devices that can implement complete or partial functions without relying on a smartphone, for example, smart watches or smart glasses, and devices that focus only on a specific type of application function and need to be used together with another device such as a smartphone, for example, various smart bands and smart jewelry for vital sign monitoring.

In the embodiments of this application, a device for implementing the functions of the terminal device may be the terminal device, or may be a device capable of supporting the terminal device in implementing the functions, for example, a chip system, and the device may be installed in the terminal device or used in a manner of matching the terminal device. In the embodiments of this application, the chip system may consist of a chip, or may include a chip and another discrete component. In the embodiments of this application, only the case in which the device for implementing the functions of the terminal device is the terminal device is used as an example for description, and this does not constitute a limitation on the solutions of the embodiments of this application.

The network device in the embodiments of this application may include a device configured to communicate with the terminal device. The network device may include an access network device or a radio access network device; for example, the network device may be a base station. The network device in the embodiments of this application may include a radio access network (RAN) node (or device) that connects the terminal device to a wireless network.

The base station may broadly cover, or be replaced with, various names such as a NodeB, an evolved NodeB (eNB), a next generation NodeB (gNB), a relay station, an access point, a transmitting and receiving point (TRP), a transmitting point (TP), a master station, a secondary station, a multi-standard radio (MSR) node, a home base station, a network controller, an access node, a radio node, an access point (AP), a transmission node, a transceiver node, a baseband unit (BBU), a remote radio unit (RRU), an active antenna unit (AAU), a remote radio head (RRH), a central unit (CU), a distributed unit (DU), a radio unit (RU), a positioning node, a RAN intelligent controller (RIC), or the like.

The base station may alternatively be a macro base station, a micro base station, a relay node, a donor node, or the like, or a combination thereof. The base station may also refer to a communication module, a modem, or a chip disposed in the foregoing device or apparatus. The base station may alternatively be a mobile switching center, a device that performs a base station function in D2D, V2X, or M2M communication, a network side device in a future network, a device that performs a base station function in a future communication system, or the like. The base station may support networks of the same or different access technologies.

Optionally, the RAN node may alternatively be a server, a wearable device, a vehicle, a vehicle-mounted device, or the like. For example, the access network device in vehicle-to-everything (V2X) technology may be a road side unit (RSU). The specific technology and the specific device form used by the network device are not limited in the embodiments of this application.

The base station may be fixed or mobile. For example, a helicopter or a drone may be configured to serve as a mobile base station, and one or more cells may move based on the location of the mobile base station. In other examples, a helicopter or a drone may be configured to serve as a device that communicates with another base station.

In some deployments, the network device may be a device including a CU or a DU, a device including a CU and a DU, or a device including a control plane CU node (central unit-control plane (CU-CP)), a user plane CU node (central unit-user plane (CU-UP)), and a DU node. For example, the network device includes a gNB-CU-CP, a gNB-CU-UP, and a gNB-DU.

In some deployments, multiple RAN nodes cooperate to assist a terminal in implementing radio access, and different RAN nodes separately implement some functions of the base station. For example, a RAN node may be a CU, a DU, a CU-CP, a CU-UP, an RU, or the like. The CU and the DU may be separately disposed, or may be included in a same network element, for example, a BBU. The RU may be included in a radio frequency device or a radio frequency unit, for example, in an RRU, an AAU, or an RRH.

A RAN node may support one or more types of fronthaul interfaces, and different fronthaul interfaces correspond to DUs and RUs with different functions.

If the fronthaul interface between the DU and the RU is the common public radio interface (CPRI), the DU is configured to implement one or more of the baseband functions, and the RU is configured to implement one or more of the radio frequency functions.

If the fronthaul interface between the DU and the RU is another interface, then compared with CPRI, some downlink and/or uplink baseband functions are moved from the DU to the RU for implementation. For example, for downlink, one or more of precoding, digital beamforming (BF), or inverse fast Fourier transform (IFFT)/cyclic prefix (CP) addition is moved from the DU to the RU; for uplink, one or more of digital beamforming (BF) or fast Fourier transform (FFT)/cyclic prefix (CP) removal is moved from the DU to the RU.

In a possible implementation, the interface may be an enhanced common public radio interface (eCPRI). In the eCPRI architecture, different splits between the DU and the RU correspond to different categories (Cat) of eCPRI, for example, eCPRI Cat A, B, C, D, E, and F.

eCPRI Cat A is used as an example. For downlink transmission, the split is at layer mapping: the DU is configured to implement layer mapping and one or more functions before it (that is, one or more of coding, rate matching, scrambling, modulation, and layer mapping), and the other functions after layer mapping (for example, one or more of resource element (RE) mapping, digital beamforming (BF), or inverse fast Fourier transform (IFFT)/cyclic prefix (CP) addition) are moved to the RU for implementation. For uplink transmission, the split is at RE demapping: the DU is configured to implement RE demapping and one or more related functions (that is, one or more of decoding, de-rate matching, descrambling, demodulation, inverse discrete Fourier transform (IDFT), channel equalization, and RE demapping), and the other functions (for example, one or more of digital BF or fast Fourier transform (FFT)/CP removal) are moved to the RU for implementation. It may be understood that for descriptions of the functions of the DU and the RU corresponding to the various categories of eCPRI, refer to the eCPRI protocol; details are not described herein.
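To make the eCPRI Cat A split described above easier to follow, the following is a minimal Python sketch (not part of the eCPRI specification) that writes the downlink and uplink function placement out as simple dictionaries; the function names are informal labels used only for illustration.

```python
# Illustrative mapping of the eCPRI Cat A split described above.
ECPRI_CAT_A_SPLIT = {
    "downlink": {
        "DU": ["coding", "rate_matching", "scrambling", "modulation", "layer_mapping"],
        "RU": ["re_mapping", "digital_beamforming", "ifft_and_cp_addition"],
    },
    "uplink": {
        "DU": ["decoding", "de_rate_matching", "descrambling", "demodulation",
               "idft", "channel_equalization", "re_demapping"],
        "RU": ["digital_beamforming", "fft_and_cp_removal"],
    },
}

def node_for(direction: str, function: str) -> str:
    """Return which node (DU or RU) hosts a given function under this example split."""
    for node, functions in ECPRI_CAT_A_SPLIT[direction].items():
        if function in functions:
            return node
    raise ValueError(f"unknown function: {function}")

print(node_for("downlink", "layer_mapping"))     # -> DU
print(node_for("uplink", "fft_and_cp_removal"))  # -> RU
```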

In a possible design, the processing unit in the BBU that is configured to implement baseband functions is referred to as a baseband high (BBH) unit, and the processing unit in the RRU/AAU/RRH that is configured to implement baseband functions is referred to as a baseband low (BBL) unit.

In different communication systems, the CU (or the CU-CP and the CU-UP), the DU, or the RU may have different names, but persons skilled in the art can understand their meanings. For example, in an open RAN (ORAN) system, the CU may also be referred to as an O-CU (open CU), the DU may also be referred to as an O-DU, the CU-CP may also be referred to as an O-CU-CP, the CU-UP may also be referred to as an O-CU-UP, and the RU may also be referred to as an O-RU. Any one of the CU (or the CU-CP and the CU-UP), the DU, and the RU in this application may be implemented by a software module, a hardware module, or a combination of a software module and a hardware module.

In the embodiments of this application, a device for implementing the functions of the network device may be the network device, or may be a device capable of supporting the network device in implementing the functions, for example, a chip system, a hardware circuit, a software module, or a hardware circuit plus a software module. The device may be installed in the network device or used in a manner of matching the network device. In the embodiments of this application, only the case in which the device for implementing the functions of the network device is the network device is used as an example for description, and this does not constitute a limitation on the solutions of the embodiments of this application.

The network device and/or the terminal device may be deployed on land, including indoor or outdoor, handheld or vehicle-mounted scenarios; may be deployed on water; or may be deployed on an aircraft, a balloon, or a satellite in the air. The scenarios in which the network device and the terminal device are located are not limited in the embodiments of this application.

In addition, the terminal device and the network device may be hardware devices, or may be software functions running on dedicated hardware, software functions running on general-purpose hardware, for example, virtualized functions instantiated on a platform (for example, a cloud platform), or entities including dedicated or general-purpose hardware devices and software functions. The specific forms of the terminal device and the network device are not limited in this application.

In a wireless communication network (for example, a mobile communication network), the services supported by the network are increasingly diverse, and the requirements that need to be met are increasingly diverse. For example, the network needs to be capable of supporting ultra-high rates, ultra-low latency, and ultra-large connections. These characteristics make network planning, network configuration, and resource scheduling increasingly complex. In addition, because the network has increasingly powerful functions, for example, supporting increasingly high spectrum, high-order multiple input multiple output (MIMO) technology, beamforming, and/or new technologies such as beam management, network energy saving has become a popular research topic. These new requirements, new scenarios, and new characteristics bring unprecedented challenges to network planning, operation and maintenance, and efficient operation. To meet these challenges, AI technology may be introduced into the wireless communication network to implement network intelligence.

To support AI technology in a wireless network, an AI node (which may also be referred to as an AI entity) may further be introduced into the network.

Optionally, the AI entity may be deployed in one or more of the following locations in the communication system: an access network device, a terminal device, a core network device, or the like. Alternatively, the AI entity may be deployed independently, for example, in a location other than any one of the foregoing devices, such as the host or a cloud server of an OTT system. The AI entity may communicate with other devices in the communication system, and the other devices may be, for example, one or more of the following: a network device, a terminal device, or a network element of the core network. Based on the object served by the AI entity, the AI entity may include an AI entity on the network device side, an AI entity on the terminal device side, or an AI entity on the core network side.

It may be understood that the number of AI entities is not limited in this application. For example, when there are multiple AI entities, the multiple AI entities may be divided based on functions, such that different AI entities are responsible for different functions.

It may further be understood that the AI entities may be independent devices, or may be integrated into a same device to implement different functions; or they may be network elements in a hardware device, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (for example, a cloud platform). The specific forms of the foregoing AI entities are not limited in this application.

An AI entity may be an AI network element or an AI module, and is used to implement a corresponding AI function. AI modules deployed in different network elements may be the same or different. Based on different parameter configurations of the AI model in the AI entity, the AI entity can implement different functions. The AI model in the AI entity may be configured based on one or more of the following parameters: structure parameters (for example, at least one of the number of neural network layers, the neural network width, the connection relationships between layers, the weights of the neurons, the activation functions of the neurons, or the biases in the activation functions), input parameters (for example, the type and/or the dimension of the input parameters), or output parameters (for example, the type and/or the dimension of the output parameters). The bias in the activation function may also be referred to as the bias of the neural network.

One AI entity may have one or more models. The learning processes, training processes, or inference processes of different models may be deployed in different entities or devices, or may be deployed in the same entity or device.

Figure 1 is a schematic diagram of an application framework applicable to an embodiment of this application. As shown in Figure 1, devices are connected through interfaces (for example, NG and Xn) or air interfaces. One or more of these device nodes, for example, a core network device, an access network node (RAN node), a terminal, or OAM, are provided with one or more AI modules (for clarity, only one is shown in Figure 1). The access network node may serve as an independent RAN node, or may include multiple RAN nodes, for example, a CU and a DU. The CU and/or the DU may also be provided with one or more AI modules. Optionally, the CU may further be split into a CU-CP and a CU-UP, and one or more AI models are configured in the CU-CP and/or the CU-UP.

The AI module is used to implement a corresponding AI function. AI modules deployed in different devices may be the same or different. Based on different parameter configurations of the model of the AI module, the AI module can implement different functions. The model of the AI module may be configured based on one or more of the following parameters: structure parameters (for example, at least one of the number of neural network layers, the neural network width, the connection relationships between layers, the weights of the neurons, the activation functions of the neurons, or the biases in the activation functions), input parameters (for example, the type and/or the dimension of the input parameters), or output parameters (for example, the type and/or the dimension of the output parameters). The bias in the activation function may also be referred to as the bias of the neural network.

One AI module may have one or more models. One model may obtain one output through inference, and the output includes one parameter or multiple parameters. The learning processes, training processes, or inference processes of different models may be deployed in different nodes or devices, or may be deployed in the same node or device.
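As an illustration of the parameter configuration described above, the following Python sketch groups the structure, input, and output parameters of a model into a single configuration object. The class and field names (for example, AIModelConfig) are hypothetical and are used here only to make the description concrete; the values are arbitrary placeholders.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ModelStructureConfig:
    num_layers: int                 # number of neural network layers
    layer_widths: List[int]         # width of each layer
    activation: str = "relu"        # activation function of the neurons
    use_bias: bool = True           # whether the activation carries a bias term

@dataclass
class ModelIOConfig:
    input_type: str                 # type of the input parameters
    input_dim: List[int]            # dimension of the input parameters
    output_type: str                # type of the output parameters
    output_dim: List[int]           # dimension of the output parameters

@dataclass
class AIModelConfig:
    structure: ModelStructureConfig
    io: ModelIOConfig

# Example configuration for a small CSI-compression-style model (placeholder values).
config = AIModelConfig(
    structure=ModelStructureConfig(num_layers=3, layer_widths=[256, 128, 64]),
    io=ModelIOConfig(
        input_type="channel_eigenvectors",
        input_dim=[13, 32, 2],
        output_type="csi_feedback_bits",
        output_dim=[64],
    ),
)
```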

Figure 2 is a schematic diagram of another application framework applicable to an embodiment of this application. As shown in Figure 2, the communication system includes a RAN intelligent controller (RIC). For example, the RIC may be the AI module in the RAN node shown in Figure 1 and is used to implement AI-related functions. RICs include a near-real time RIC (near-RT RIC) and a non-real time RIC (non-RT RIC). The non-real time RIC mainly processes non-real-time information, for example, data that is insensitive to latency, where the latency of the data may be on the order of seconds. The near-real time RIC mainly processes near-real-time information, for example, data that is relatively sensitive to latency, where the latency of the data is on the order of tens of milliseconds.

The near-real time RIC is used for model training and inference, for example, training an AI model and performing inference by using the AI model. The near-real time RIC may obtain network-side and/or terminal-side information from a RAN node (for example, a CU, a CU-CP, a CU-UP, a DU, and/or an RU) and/or a terminal. This information may serve as training data or inference data.

Optionally, the near-real time RIC may deliver the inference results to the RAN node and/or the terminal.

Optionally, inference results may be exchanged between the CU and the DU, and/or between the DU and the RU. For example, the near-real time RIC delivers the inference results to the DU, and the DU sends them to the RU.

The non-real time RIC is also used for model training and inference, for example, training an AI model and performing inference by using the model. The non-real time RIC may obtain network-side and/or terminal-side information from a RAN node (for example, a CU, a CU-CP, a CU-UP, a DU, and/or an RU) and/or a terminal. This information may serve as training data or inference data, and the inference results may be delivered to the RAN node and/or the terminal.

Optionally, inference results may be exchanged between the CU and the DU, and/or between the DU and the RU. For example, the non-real time RIC delivers the inference results to the DU, and the DU sends them to the RU.

The near-real time RIC and the non-real time RIC may each alternatively be configured as an independent device. Optionally, the near-real time RIC and the non-real time RIC may alternatively be part of other devices. For example, the near-real time RIC is disposed in a RAN node (for example, in a CU or a DU), and the non-real time RIC is disposed in OAM, a cloud server, a core network device, or another device.

Figure 3 is a schematic diagram of a communication system applicable to an embodiment of this application. As shown in Figure 3, the communication system may include at least one network device, for example, a network device 110; the communication system 100 may further include at least one terminal device, for example, a terminal device 120 and a terminal device 130. The network device 110 may communicate with the terminal devices (such as the terminal device 120 and the terminal device 130) through radio links. The communication devices in this communication system, for example, the network device 110 and the terminal device 120, may communicate with each other by using multi-antenna technology.

Figure 4 is a schematic diagram of another communication system applicable to an embodiment of this application. Compared with the communication system described in Figure 3, the communication system shown in Figure 4 further includes an AI device 140, which is used to perform AI-related operations, for example, constructing a training dataset or training an AI model. The AI device 140 is the aforementioned AI node or AI entity.

In a possible implementation, the network device 110 sends data related to the training of an AI model to the AI device 140, and the AI device 140 constructs a training dataset and trains the AI model. For example, the data related to the training of the AI model may include data reported by the terminal device. The AI device 140 sends the results of the AI model-related operations to the network device 110, and the results are forwarded to the terminal device through the network device 110. For example, the results of the AI model-related operations include at least one of the following: a trained AI model, an evaluation result of the model, a test result, or the like. For example, a part of the trained AI model is deployed on the network device 110, and another part is deployed on the terminal device. Alternatively, the trained AI model is deployed on the network device 110. Or, the trained AI model is deployed on the terminal device.

It should be understood that Figure 4 merely uses the case in which the AI device 140 is directly connected to the network device 110 as an example. In other scenarios, the AI device 140 may alternatively be connected to the terminal device; the AI device 140 may also be connected to both the network device 110 and the terminal device; or the AI device 140 may be connected to the network device 110 through a third-party device. Therefore, this application does not limit the connection relationship between the AI device and other devices.

The AI device 140 may alternatively be installed as a module in the network device and/or the terminal device, for example, in the network device 110 or the terminal device shown in Figure 3.

It should be noted that Figures 3 and 4 are merely simplified schematic diagrams used as examples for ease of understanding. For example, the communication system may further include other devices, such as wireless relay devices and/or wireless backhaul devices, which are not shown in Figures 3 and 4. In practical applications, the communication system may include multiple network devices (such as the network device 110 and a network device 150 (not shown in Figure 3)), and may also include multiple terminal devices. Therefore, this application does not limit the number of network devices and terminal devices included in the communication system.

Next, some technical concepts involved in this application are briefly described.

(1) AI model:

An AI model is an algorithm or a computer program that can implement an AI function, and the AI model represents the mapping relationship between the model's input and output. An AI model can be understood as a function model that maps an input of a certain dimension to an output of a certain dimension, and its model parameters are obtained through machine learning training. For example, f(x) = a*x^2 + b is a quadratic function model, and the model can be regarded as an AI model; a and b correspond to the parameters of the AI model and can be obtained through machine learning training. An AI model may also be referred to as a model, an AI function, or a function. One AI function may correspond to one or more AI models.

The type of the AI model may be a neural network, a linear regression model, a decision tree model, a support vector machine (SVM), a Bayesian network, a Q-learning model, or another machine learning (ML) model.
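As a minimal, illustrative sketch of the quadratic model f(x) = a*x^2 + b mentioned above, the following Python code learns the parameters a and b from data by gradient descent on a mean squared error; the data values, learning rate, and number of iterations are arbitrary and chosen only for illustration.

```python
# Learn the parameters a and b of f(x) = a*x^2 + b from (x, y) samples
# by minimizing the mean squared error with plain gradient descent.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0, 3.0]
ys = [13.0, 4.0, 1.0, 4.0, 13.0, 28.0]   # generated from a = 3, b = 1

a, b = 0.0, 0.0
lr = 0.01
for _ in range(2000):
    grad_a = sum(2 * (a * x**2 + b - y) * x**2 for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (a * x**2 + b - y) for x, y in zip(xs, ys)) / len(xs)
    a -= lr * grad_a
    b -= lr * grad_b

print(round(a, 2), round(b, 2))  # approximately 3.0 and 1.0
```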

(2) Two-sided model:

A two-sided model may also be referred to as a bilateral model, a collaboration model, a dual model, or a two-side model. A two-sided model is a model formed by combining multiple sub-models. The sub-models forming the model need to match each other, and they may be deployed in different nodes.

The embodiments of this application involve an encoder used to compress CSI and a decoder used to recover CSI. The encoder and the decoder are used in a matching manner, and can be understood as paired AI models. One encoder may include one or more AI models, and the decoder matching the encoder also includes one or more AI models; the numbers of AI models included in the matched encoder and decoder are the same and are in one-to-one correspondence. The encoder may further include a quantization module, which may be used to quantize the output of the AI model in the encoder. The decoder may include an inverse quantization module, which may be used to inverse-quantize the received feedback information of the channel information to obtain the input of the AI model in the decoder. Inverse quantization may also be referred to as dequantization.

In a possible design, a set of matched encoder and decoder may be the two parts of the same auto-encoder (AE). An AE model in which the encoder and the decoder are deployed on different nodes is a typical bilateral model. The encoder and the decoder of an AE model are usually trained jointly and used in a matching manner. An auto-encoder is an unsupervised learning neural network whose characteristic is that the input data is used as the label data; therefore, an auto-encoder can also be understood as a self-supervised learning neural network. An auto-encoder can be used for data compression and recovery. For example, the encoder in the auto-encoder can compress (encode) data A to obtain data B, and the decoder in the auto-encoder can decompress (decode) data B to recover data A. Alternatively, it can be understood that the decoder is the inverse operation of the encoder. For a description of the encoder and the decoder, refer to Figure 5.

Figure 5 is a schematic diagram of the relationship between the encoder and the decoder. As shown in Figure 5, the encoder processes the input V to obtain the processed result z, and the decoder can decode the encoder's output z into the desired output V'.
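The V → z → V' relationship in Figure 5 can be illustrated with a small auto-encoder. The following is a minimal sketch using PyTorch (assumed to be available); the layer sizes, training data, and hyperparameters are arbitrary placeholders and do not correspond to any specific model described in this application.

```python
import torch
from torch import nn

# Encoder: compresses an input vector V (dimension 64) into a low-dimensional z (dimension 8).
encoder = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 8))
# Decoder: recovers V' (dimension 64) from z; it approximates the inverse operation of the encoder.
decoder = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 64))

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

V = torch.randn(256, 64)           # placeholder training samples standing in for CSI-like data
for _ in range(200):               # joint training of the two sides, as described above
    z = encoder(V)                 # compression (e.g., what a terminal-side model would do)
    V_hat = decoder(z)             # recovery (e.g., what a network-side model would do)
    loss = loss_fn(V_hat, V)       # the input data serves as its own label (self-supervised)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```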

The auto-encoder model in the embodiments of this application may include an encoder deployed on the terminal device side and a decoder deployed on the network device side; or an encoder deployed on one terminal device side and a decoder deployed on another terminal device side; or an encoder deployed on one network device side and a decoder deployed on another network device side.

(3) Neural network (NN):

A neural network is a specific implementation form of AI or machine learning. According to the universal approximation theorem, a neural network can theoretically approximate any continuous function, which gives the neural network the capability of learning arbitrary mappings.

A neural network may be composed of neural units. A neural unit may be an operation unit that takes inputs x_s and an intercept of 1 as its input. A neural network is a network formed by connecting many such single neural units together, that is, the output of one neural unit may be the input of another neural unit. The input of each neural unit may be connected to the local receptive field of the previous layer to extract the features of the local receptive field, and the local receptive field may be a region composed of several neural units.
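The following short Python sketch illustrates the weighted-sum-plus-activation operation of a single neural unit described above; the weights, bias, inputs, and choice of sigmoid activation are arbitrary illustrative assumptions.

```python
import math

def neuron(inputs, weights, bias):
    """A single neural unit: a weighted sum of the inputs plus a bias (the 'intercept'),
    passed through a nonlinear activation function (here, the sigmoid)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Example: three inputs x_s; the weights and bias form the unit's parameters.
print(neuron([0.5, -1.2, 0.3], weights=[0.8, 0.1, -0.5], bias=0.2))
```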

以AI模型的类型为神经网络为例,本申请涉及的AI模型可以为深度神经网络(deep neural network,DNN)。根据网络的构建方式,DNN可以包括前馈神经网络(feedforward neural network,FNN)、卷积神经网络(convolutional neural networks,CNN)和递归神经网络(recurrent neural network,RNN)等。Taking neural networks as an example, the AI model involved in this application can be a deep neural network (DNN). Depending on the construction method of the network, DNN can include feedforward neural networks (FNN), convolutional neural networks (CNN), and recurrent neural networks (RNN), etc.

CNN是一种专门来处理具有类似网格结构的数据的神经网络。例如,时间序列数据(时间轴离散采样)和图像数据(二维离散采样)都可以认为是类似网格结构的数据。CNN并不一次性利用全部的输入信息做运算,而是采用一个固定大小的窗截取部分信息做卷积运算,这就大大降低了模型参数的计算量。另外根据窗截取的信息类型的不同(如同一副图中的人和物为不同类型信息),每个窗可以采用不同的卷积核运算,这使得CNN能更好的提取输入数据的特征。CNNs are neural networks specifically designed to process data with a grid-like structure. For example, time-series data (discrete sampling along the time axis) and image data (two-dimensional discrete sampling) can both be considered grid-like data. CNNs do not use all the input information at once for computation; instead, they use a fixed-size window to extract a portion of the information for convolution operations, which significantly reduces the computational cost of model parameters. Furthermore, depending on the type of information extracted by the window (such as people and objects in an image representing different types of information), each window can use different convolution kernels, allowing CNNs to better extract features from the input data.

An RNN is a class of DNN that makes use of feedback of time-series information. Its input includes the new input value at the current time instant and its own output value at the previous time instant. RNNs are suitable for capturing sequence features that are correlated in time, and are particularly applicable to applications such as speech recognition and channel encoding/decoding.

A characteristic of the FNN is that the neurons in adjacent layers are fully connected in pairs, which usually requires a large amount of storage space and results in high computational complexity.

The FNN, CNN, and RNN described above are all constructed on the basis of neurons. As described above, each neuron performs a weighted summation on its input values, and the weighted-sum result is passed through a nonlinear function to produce the output. The weights of the weighted summation of the neurons in a neural network and the nonlinear functions are referred to as the parameters of the neural network. The parameters of all the neurons of a neural network constitute the parameters of that neural network.

(4) Dataset:

A dataset refers to the data used for model training, validation, and testing in machine learning. The quantity and quality of the data affect the effect of machine learning.

In the field of machine learning, the ground truth usually refers to data that is considered accurate or real.

A training dataset is used for training an AI model. The training dataset may include the input of the AI model, or may include the input and the target output of the AI model. The training dataset includes one or more pieces of training data, and the training data may include training samples input into the AI model and may also include the target output of the AI model. The target output may also be referred to as a label, a sample label, or a label sample. The label is the ground truth.

In the field of communications, the training dataset may include simulation data collected through a simulation platform, experimental data collected in experimental scenarios, or measured data collected in an actual communication network. Because the geographical environment and channel conditions in which the data is generated differ (for example, indoor or outdoor settings, movement speed, frequency band, or antenna configuration), the collected data may be classified when the data is acquired. For example, data with the same channel propagation environment and antenna configuration is grouped into one class.

Model training is essentially learning certain features from the training data. In the process of training an AI model (such as a neural network model), because it is desired that the output of the AI model be as close as possible to the value that is actually to be predicted, the predicted value of the current network can be compared with the actually desired target value, and the weight vector of each layer of the AI model is then updated according to the difference between the two (certainly, there is usually an initialization process before the first update, that is, parameters are preconfigured for each layer of the AI model). For example, if the predicted value of the network is too high, the weight vector is adjusted to make the prediction lower, and the adjustment continues until the AI model can predict the actually desired target value or a value very close to it. Therefore, "how to compare the difference between the predicted value and the target value" needs to be predefined; this is the loss function or objective function, which is an important equation used to measure the difference between the predicted value and the target value. Taking the loss function as an example, a higher output value (loss) of the loss function indicates a larger difference, so training the AI model becomes a process of reducing this loss as much as possible, so that the value of the loss function becomes smaller than a threshold or meets the target requirement. For example, when the AI model is a neural network, adjusting the model parameters of the neural network includes adjusting at least one of the following parameters: the number of layers of the neural network, the width, the weights of the neurons, or the parameters in the activation functions of the neurons.
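The loop described above (compare the prediction with the target, compute the loss, update the parameters until the loss is below a threshold) can be sketched as follows. The sketch is illustrative only; the model shape, the learning rate, and the stopping threshold are arbitrary assumptions, and MSE is used as the loss purely as an example.

```python
# Illustrative training loop: minimize the loss between prediction and target.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
loss_threshold = 1e-3

x = torch.randn(64, 16)   # training samples
target = x.clone()        # target output (labels); here the samples themselves

for step in range(10_000):
    prediction = model(x)
    loss = loss_fn(prediction, target)   # difference between predicted and target values
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                     # adjust the weights to reduce the loss
    if loss.item() < loss_threshold:     # stop once the target requirement is met
        break
```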

Inference data may be used as the input of a trained AI model for inference of the AI model. In the model inference process, the inference data is input into the AI model, and the corresponding output obtained is the inference result.

(5) AI model design:

The design of an AI model mainly includes a data collection phase (for example, collecting training data and/or inference data), a model training phase, and a model inference phase. It may further include an inference result application phase.

The training processes of different models may be deployed on different devices or nodes, or on the same device or node. The inference processes of different models may be deployed on different devices or nodes, or on the same device or node. Taking the case where a terminal device completes the model training phase as an example, after training a matched encoder and decoder, the terminal device may send the model parameters of the decoder to the network device. Taking the case where a network device completes the model training phase as an example, after training a matched encoder and decoder, the network device may indicate the model parameters of the encoder to the terminal device. Taking the case where an independent AI network element completes the model training phase as an example, after training a matched encoder and decoder, the AI network element may send the model parameters of the encoder to the terminal device and send the model parameters of the decoder to the network device. The model inference phase corresponding to the encoder is then performed in the terminal device, and the model inference phase corresponding to the decoder is performed in the network device.

The model parameters may include one or more of the following: structure parameters of the model (for example, the number of layers and/or the weights of the model), input parameters of the model (for example, the input dimension and the number of input ports), or output parameters of the model (for example, the output dimension and the number of output ports). The input dimension may refer to the size of one piece of input data; for example, when the input data is a sequence, the input dimension corresponding to the sequence may indicate the length of the sequence. The number of input ports may refer to the number of pieces of input data. Similarly, the output dimension may refer to the size of one piece of output data; for example, when the output data is a sequence, the output dimension corresponding to the sequence may indicate the length of the sequence. The number of output ports may refer to the number of pieces of output data.

(6) Channel information:

In a communication system (for example, an LTE communication system or an NR communication system), a network device determines, based on channel information, one or more of the configurations such as the resources of the downlink data channel for scheduling a terminal device, the modulation and coding scheme (MCS), and the precoding. Channel information may also be referred to as channel state information (CSI) or channel environment information, and is information that can reflect channel characteristics and channel quality.

Channel information measurement means that the receiving end obtains the channel information based on a reference signal sent by the transmitting end, that is, the channel information is estimated by using a channel estimation method. For example, the reference signal may include one or more of a channel state information reference signal (CSI-RS), a synchronization signal/physical broadcast channel block (SSB), a sounding reference signal (SRS), a demodulation reference signal (DMRS), and the like. One or more of the CSI-RS, the SSB, the DMRS, and the like may be used to measure downlink channel information. The SRS and/or the DMRS may be used to measure uplink channel information.

The channel information may be determined based on a channel measurement result of a reference signal. Alternatively, the channel information may be the channel measurement result of the reference signal. In the embodiments of this application, the channel measurement result of the reference signal, or the channel measurement result, may also be replaced with channel information.

Taking an FDD communication scenario as an example, because the uplink and downlink channels do not have reciprocity, or reciprocity of the uplink and downlink channels cannot be guaranteed, the network device needs to obtain the downlink CSI through uplink feedback performed by the terminal device. The network device usually sends a downlink reference signal to the terminal device, and the terminal device receives the downlink reference signal. Because the terminal device knows the transmission information of the downlink reference signal, the terminal device can perform channel measurement and interference measurement based on the received downlink reference signal to estimate (measure) the downlink channel experienced by the downlink reference signal. The terminal device generates the downlink CSI based on the downlink channel matrix obtained through the measurement. The terminal device generates a CSI report in a manner predefined by the protocol or configured by the network device and feeds it back to the network device, so that the network device obtains the downlink CSI.

In this application, the meaning of CSI is broader than that of CSI in conventional solutions and is not limited to the channel quality indicator (CQI), the precoding matrix indicator (PMI), the rank indicator (RI), or the CSI-RS resource indicator (CRI); it may also be one or more of channel response information (such as a channel response matrix, frequency-domain channel response information, or time-domain channel response information), weight information corresponding to the channel response, reference signal received power (RSRP), signal to interference plus noise ratio (SINR), and the like.

The RI is used to indicate the number of downlink transmission layers recommended by the receiving end of the reference signal, such as the terminal device; the CQI is used to indicate the modulation and coding scheme that, as determined by the receiving end of the reference signal, such as the terminal device, can be supported under the current channel conditions; and the PMI is used to indicate the precoding recommended by the receiving end of the reference signal, such as the terminal device. The number of precoding layers indicated by the PMI corresponds to the RI.

As described above, channel information can be obtained by measuring the reference signal. Feedback information can be obtained by compressing and/or quantizing the channel information. The feedback information may be reported through a channel information report, and therefore the feedback information may also be referred to as a channel report. In the embodiments of this application, one channel report may include at least one sub-channel report.

The channel information can be recovered by decompressing and/or dequantizing the feedback information. Decompression may also be understood as recovery, reconstruction, or the like, which is not described again below.

The feedback information may also be referred to as feedback information of the channel information, feedback information of the CSI, CSI feedback information, compressed information, compressed information of the channel information, compressed information of the CSI, compressed channel information, compressed CSI, or the like.

The recovered channel information may also be referred to as CSI recovery information.

As the antenna array scale of MIMO systems keeps increasing, the number of supported antenna ports increases, and the dimensions of the corresponding channel matrix and precoding matrix grow. To enable the terminal device to estimate (measure) the downlink channel, the overhead of the reference signals sent by the network device increases. At the same time, the error of approximately representing a large-scale channel matrix and precoding matrix with a limited number of predefined codewords increases. One method of improving the channel recovery accuracy is to increase the number of codewords in the codebook, but this also increases the overhead of the CSI feedback (including one or more of the indices corresponding to the codewords and the weighting coefficients), which in turn reduces the resources available for data transmission and causes a loss of system capacity.

Introducing AI technology into wireless communication networks has produced an AI-model-based CSI feedback method, namely AI-CSI feedback. The terminal device uses an AI model to compress and feed back the CSI, and the network device uses an AI model to recover the compressed CSI. What is transmitted in AI-based CSI feedback is a sequence (for example, a bit sequence), so the overhead is lower than that of conventional CSI feedback. Moreover, AI models have a stronger nonlinear feature extraction capability; compared with conventional solutions, they can compress and represent the channel information more effectively and recover the channel more effectively from the feedback information.

CSI feedback can be implemented based on an AE-based AI model. Taking Figure 5 as an example, the encoder in Figure 5 may be a CSI generator, and the decoder may be a CSI reconstructor. For example, the encoder may be deployed in the terminal device, and the decoder may be deployed in the network device.

The channel information V is passed through the encoder to generate the CSI feedback information z. The channel information is reconstructed by the decoder, that is, the recovered channel information V' is obtained.

The channel information V may be obtained through channel information measurement. Take as an example the case where the channel information V includes the eigenvector matrix of the downlink channel (a matrix composed of eigenvectors). The encoder processes the eigenvector matrix of the downlink channel to obtain the CSI feedback information z. In other words, the operation in related solutions of compressing and/or quantizing the eigenvector matrix based on a codebook is replaced with the operation of processing the eigenvector matrix by the encoder to obtain the CSI feedback information z. The recovered channel information V' can be obtained by processing the CSI feedback information z with the decoder.
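One common way to obtain such an eigenvector matrix from an estimated downlink channel is sketched below: per subband, the dominant right singular vectors of the channel matrix H (equivalently, eigenvectors of H^H H) are taken as the input V that the encoder would then compress into z. The dimensions used (13 subbands, 4 receive antennas, 32 transmit antenna ports, rank 1) are arbitrary assumptions for illustration.

```python
# Illustrative sketch: build the per-subband eigenvector matrix V from channel estimates H.
import numpy as np

def eigenvector_matrix(h_per_subband: np.ndarray, rank: int = 1) -> np.ndarray:
    # h_per_subband: (num_subbands, n_rx, n_tx) complex channel estimates
    vectors = []
    for h in h_per_subband:
        _, _, vh = np.linalg.svd(h)           # h = U @ diag(s) @ vh
        vectors.append(vh[:rank].conj().T)    # dominant right singular vector(s), shape (n_tx, rank)
    return np.stack(vectors)                  # V: (num_subbands, n_tx, rank)

h = (np.random.randn(13, 4, 32) + 1j * np.random.randn(13, 4, 32)) / np.sqrt(2)
v = eigenvector_matrix(h, rank=1)
print(v.shape)    # (13, 32, 1); this V would be the encoder input
```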

The training process and the inference process of the AI model in the embodiments of this application are further described below by way of example.

The training data used for training the AI model includes training samples and sample labels. For example, the training samples are channel information measured by the terminal device, and the sample labels are real channel information, that is, ground-truth CSI. When the encoder and the decoder belong to the same autoencoder, the training data may include only the training samples; in other words, the training samples are the sample labels.

In the field of wireless communication, the ground-truth CSI may be high-precision CSI.

The specific training process is as follows: the model training node processes the channel information, that is, the training samples, with the encoder to obtain the CSI feedback information, and processes the feedback information with the decoder to obtain the recovered channel information, that is, the CSI recovery information. The difference between the CSI recovery information and the corresponding sample label, that is, the value of the loss function, is then calculated, and the parameters of the encoder and the decoder are updated according to the value of the loss function, so that the difference between the recovered channel information and the corresponding sample label is minimized, that is, the loss function is minimized. For example, the loss function may be the mean square error (MSE) or the cosine similarity. By repeating the foregoing operations, an encoder and a decoder that meet the target requirement can be obtained. The model training node may be a terminal device, a network device, or another network element with an AI function in the communication system. The AI model may be implemented by a hardware circuit, by software, or by a combination of software and hardware. Non-limiting examples of software include program code, a program, a subroutine, instructions, an instruction set, code, a code segment, a software module, an application, a software application, and the like.
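The two loss choices mentioned above can be sketched as follows for per-subband eigenvectors. The use of one minus the mean squared generalized cosine similarity as the quantity to minimize is an assumption for illustration; plain cosine similarity can be plugged in the same way.

```python
# Illustrative loss functions for comparing recovered eigenvectors V' with label eigenvectors V.
import numpy as np

def mse_loss(v_hat: np.ndarray, v: np.ndarray) -> float:
    return float(np.mean(np.abs(v_hat - v) ** 2))

def cosine_similarity_loss(v_hat: np.ndarray, v: np.ndarray) -> float:
    # v_hat, v: (num_subbands, n_tx) complex eigenvectors
    num = np.abs(np.sum(np.conj(v) * v_hat, axis=-1))
    den = np.linalg.norm(v, axis=-1) * np.linalg.norm(v_hat, axis=-1) + 1e-12
    gcs = num / den                          # generalized cosine similarity per subband
    return float(1.0 - np.mean(gcs ** 2))    # minimize 1 - squared GCS

v = np.random.randn(13, 32) + 1j * np.random.randn(13, 32)
v_hat = v + 0.1 * (np.random.randn(13, 32) + 1j * np.random.randn(13, 32))
print(mse_loss(v_hat, v), cosine_similarity_loss(v_hat, v))
```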

At present, the configuration of the reference signals sent by the network device to the terminal device is flexible and variable, which may cause performance fluctuations of the AI model in the process of processing channel information. For example, when the AI model on the terminal device side is an AI CSI compression model, this may cause performance fluctuations of the CSI feedback; for another example, when the AI model on the terminal device side is an AI CSI prediction model, this may cause performance fluctuations of the CSI prediction.

Based on this, this application aims to provide a communication method that can guarantee or improve the performance of the AI model when the network device performs different resource configurations for the terminal device.

The communication method provided in the embodiments of this application is described in detail below with reference to the accompanying drawings. It should be noted in advance that Figures 6 to 11 are described by using an example in which the first device is a terminal device and the second device is a network device. For ease of understanding, in the related content and descriptions of Figures 6 to 11, the terminal device is used in place of the first device, and the network device is used in place of the second device.

Figure 6 is a schematic flowchart of a communication method 600 provided in an embodiment of this application. As shown in Figure 6, the method includes at least the following steps.

S610: The network device determines a first resource configuration.

S620: The network device sends first indication information to the terminal device, and correspondingly, the terminal device receives the first indication information.

Specifically, in step S610 and step S620, the first indication information indicates the first resource configuration, and the first resource configuration is one of at least one resource configuration supported by a first AI model. After receiving the first resource configuration, the terminal device obtains channel information through measurement based on the first resource configuration, and processes the channel information according to the first AI model.

For example, when the first AI model is an AI CSI compression model, processing the channel information according to the first AI model includes: compressing the channel information according to the first AI model.

For example, when the first AI model is an AI CSI prediction-then-compression model, processing the channel information according to the first AI model includes: performing prediction processing on the channel information according to the first AI model, and then further compressing the channel information obtained through the prediction processing. The AI CSI prediction-then-compression model may also be referred to as an AI CSI prediction plus compression model or an AI CSI prediction + compression model; it should be understood that this application does not limit this.

Optionally, in the foregoing two examples, the method may further include: the terminal device determines a first channel report. This can be understood as follows: the first channel report is determined after the terminal device compresses, based on the first AI model, the channel information obtained through measurement under the first resource configuration.

For example, when the first AI model is an AI CSI prediction model or an AI beam time-domain prediction model, processing the channel information according to the first AI model includes: performing prediction processing on the channel information according to the first AI model. Optionally, in this case, the method may further include: the terminal device compresses, according to a non-AI model, the channel information obtained through the prediction processing.

In the embodiments of this application, a resource configuration includes a configuration type, an offset between adjacent resources, or a number of transmissions, where the configuration type includes at least one of the following: periodic configuration, semi-persistent configuration, or aperiodic configuration. For example, the resource configuration may be a CSI-RS configuration, or may be a configuration of another reference signal.

For example, when the resource configuration is a periodic CSI-RS configuration, the network device configures the transmission periodicity (for example, every N slots) and the offset (the slot offset within the period) of the CSI-RS resource and notifies the terminal device. The transmission periodicity of the CSI-RS resource can be understood as the offset between adjacent CSI-RS resources.

For example, when the resource configuration is a semi-persistent CSI-RS configuration, the network device configures the transmission periodicity (for example, every N slots) and the offset (the slot offset within the period) of the CSI-RS resource and notifies the terminal device. The network device may activate or deactivate the transmission of the CSI-RS resource through configuration information such as a medium access control control element (MAC-CE).

For example, when the resource configuration is an aperiodic CSI-RS configuration, the network device notifies the terminal device of each transmission of the CSI-RS resource through downlink control information (DCI) signaling. In addition, in one case, the aperiodic CSI-RS configuration also supports configuring the transmission of one or more CSI-RS resources at a time. For example, the network device performs the configuration through a set of parameters [m, K], where m is the transmission periodicity of the aperiodically configured CSI-RS resources (for example, every m slots), and K is the number of transmissions of the CSI-RS resources. The transmission periodicity of the CSI-RS resources can be understood as the offset between adjacent CSI-RS resources, and the number of transmissions of the CSI-RS resources can be understood as the number of CSI-RS resources; details are not described again below.
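As a small worked example of the [m, K] parameterization described above, the sketch below lists the slots in which the K CSI-RS resources would appear relative to the slot carrying the triggering DCI. The offset from the triggering slot to the first resource is an assumption (taken to be m here) and is not specified by the description above.

```python
# Illustrative: slots of K aperiodic CSI-RS transmissions spaced m slots apart.
def aperiodic_csi_rs_slots(trigger_slot: int, m: int, k: int) -> list[int]:
    return [trigger_slot + m * (i + 1) for i in range(k)]

print(aperiodic_csi_rs_slots(trigger_slot=100, m=5, k=4))   # [105, 110, 115, 120]
```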

It should be noted that the feedback of the channel report also supports the foregoing three configuration types.

It should also be noted that, in step S610, the network device may determine the first resource configuration in the following several possible implementations.

Method 1:

Optionally, in one possible implementation, the network device obtains, through protocol predefinition, at least one resource configuration supported by the first AI model, and then the network device selects one first resource configuration from the at least one resource configuration.

Alternatively, in one possible implementation, the network device obtains, through protocol predefinition, not only at least one resource configuration supported by the first AI model but also at least one resource configuration supported by a second AI model. The network device then selects one first resource configuration from all the obtained resource configurations. It should be understood that the second AI model can be understood as one or more AI models different from the first AI model.

Method 2:

Optionally, in one possible implementation, before step S610, the method may further include: the terminal device sends first information to the network device, and correspondingly, the network device receives the first information, where the first information is associated with at least one resource configuration.

For example, in one possible implementation, the first information includes at least one resource configuration supported by the first AI model.

Specifically, the terminal device sends the first information to the network device, where the first information includes at least one resource configuration supported by the first AI model on the terminal device side; in other words, the terminal device reports, to the network device, at least one resource configuration supported by the first AI model.

For example, when there is only one AI model (for example, AI model #1) on the terminal device side, the terminal device reports at least one resource configuration supported by AI model #1. For example, as shown in Table 1, AI model #1 supports the following resource configurations.

Table 1

As can be seen from Table 1, AI model #1 on the terminal device side supports four resource configurations (resource configuration #1 to resource configuration #4). The configuration type of resource configuration #1 is periodic CSI-RS configuration, the configuration type of resource configuration #2 is semi-persistent CSI-RS configuration, and the configuration types of resource configuration #3 and resource configuration #4 are aperiodic CSI-RS configuration. For resource configuration #3, the offset between adjacent resources of the aperiodically configured CSI-RS is m1, and the number of transmissions of the CSI-RS resource is K1. For resource configuration #4, the offset between adjacent resources of the aperiodically configured CSI-RS is m2, and the number of transmissions of the CSI-RS resource is K2.

After receiving the foregoing four resource configurations, the network device selects one resource configuration (that is, the first resource configuration) from the four resource configurations and sends it to the terminal device.

For example, in one possible implementation, the first information further includes at least one resource configuration supported by a second AI model, where the second AI model can be understood as one or more AI models different from the first AI model.

For example, when there are multiple AI models on the terminal device side, the terminal device may also report at least one resource configuration supported by all the AI models. It should be noted that, in this case, the terminal device reports the multiple AI models in multiple reports. Assuming that there are three AI models on the terminal device side, in one case, the terminal device reports the resource configurations supported by the AI models in three separate reports, and the times at which the terminal device reports the resource configuration corresponding to each AI model are different. For example, the time at which the terminal device reports the resource configuration supported by AI model #1 is a first time, the time at which the terminal device reports the resource configuration supported by AI model #2 is a second time, and the time at which the terminal device reports the resource configuration supported by AI model #3 is a third time, where it should be understood that the first time, the second time, and the third time are different times. The network device can then determine the different AI models, and the resource configurations supported by the different AI models, based on the resource configurations received at the different times. For example, the network device determines, based on the resource configuration received at the first time, that it is a resource configuration supported by AI model #1.

For example, in another case, the indication information through which the terminal device reports the resource configuration corresponding to each AI model is different; in other words, the resource configurations corresponding to different AI models may be reported through different indication information. For example, the terminal device reports the resource configuration supported by AI model #1 through indication information #A, reports the resource configuration supported by AI model #2 through indication information #B, and reports the resource configuration supported by AI model #3 through indication information #C, where it should be understood that indication information #A, indication information #B, and indication information #C are different pieces of information. The network device can then determine the different AI models, and the resource configurations supported by the different AI models, based on the different indication information received. For example, the network device determines, based on the resource configuration received through indication information #A, that it is a resource configuration supported by AI model #1.

It should be understood that the foregoing examples are merely illustrative. The reporting by the terminal device of the resource configuration of each AI model in multiple reports may also be associated with other information (information other than the foregoing indication information), and this application does not limit this.

It should also be understood that the foregoing times may also be replaced with other time information, and this application does not limit this.

Referring to Table 2, AI model #1 to AI model #3 support the following resource configurations, respectively.

Table 2

As can be seen from Table 2, AI model #1 on the terminal device side supports four resource configurations (resource configuration #1 to resource configuration #4), AI model #2 supports three resource configurations (resource configuration #1 to resource configuration #3), and AI model #3 also supports three resource configurations (resource configuration #1 to resource configuration #3). For the description of the resource configurations of AI model #1, refer to the foregoing description; details are not repeated here. For AI model #2, the offset between adjacent resources of the aperiodically configured CSI-RS and the number of transmissions of the CSI-RS resource are not restricted. For AI model #3, the offset between adjacent resources of the aperiodically configured CSI-RS is m3, and the number of transmissions of the CSI-RS resource is K3.

Further, after receiving the resource configurations reported by the terminal device, the network device selects one resource configuration (that is, the first resource configuration) from the resource configurations and sends it to the terminal device.

It should be noted that AI model #1, AI model #2, and AI model #3 described above are used only to indicate that these are three different AI models and are not associated with identification information of the three AI models.

For example, in another possible implementation, the first information includes identification information of the first AI model. The identification information of the first AI model is used to identify the first AI model. For example, the identification information of the first AI model may be a numeric sequence identifier or an alphabetic sequence identifier, or the identification information of the first AI model may be a dataset identifier of the first AI model, where the dataset identifier of the first AI model is used to indicate that the first AI model is trained based on a certain dataset. For example, an apparatus has multiple datasets, and the apparatus can train multiple AI models based on the multiple datasets; the dataset identifier can indicate the dataset used to obtain a certain AI model. As an example, different datasets can be identified by different identifiers.

It should be noted that the identification information of the first AI model may be randomly generated or may be predefined; it should be understood that this application does not limit this.

It should be understood that the foregoing identification information of the first AI model is merely an example, and this application does not limit this.

Specifically, in one possible implementation, the network device obtains in advance, through protocol predefinition, a correspondence between the identification information of the first AI model (for example, identifier 1) and at least one resource configuration, as shown in Table 3. In this embodiment of this application, the terminal device sends the identification information of the first AI model (for example, identifier 1) to the network device. Based on the received identifier 1 and the correspondence, obtained through protocol predefinition, between the identification information of the first AI model and the at least one resource configuration, the network device determines the at least one resource configuration supported by the first AI model corresponding to identifier 1, selects one resource configuration (for example, the first resource configuration) from the at least one resource configuration, and sends it to the terminal device. The correspondence may also be referred to as an association relationship or the like, which is not limited here.

Table 3

As can be seen from Table 3, the AI model corresponding to identifier 1 supports four resource configurations (resource configuration #1 to resource configuration #4). The configuration type of resource configuration #1 is periodic CSI-RS configuration, the configuration type of resource configuration #2 is semi-persistent CSI-RS configuration, and the configuration types of resource configuration #3 and resource configuration #4 are aperiodic CSI-RS configuration. For resource configuration #3, the offset between adjacent resources of the aperiodically configured CSI-RS is m4, and the number of transmissions of the CSI-RS resource is K4. For resource configuration #4, the offset between adjacent resources of the aperiodically configured CSI-RS is m5, and the number of transmissions of the CSI-RS resource is K5.

It should be noted that, in another possible implementation, the network device may also obtain in advance, through protocol predefinition, a correspondence between identification information of another AI model on the terminal device side and at least one resource configuration. For example, the network device may also obtain, through protocol predefinition, a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, where the second AI model can be understood as one or more AI models different from the first AI model.

The identification information of the second AI model is used to identify the second AI model. For example, the identification information of the second AI model may be a numeric sequence identifier or an alphabetic sequence identifier, or the identification information of the second AI model may be a dataset identifier of the second AI model, where the dataset identifier of the second AI model is used to indicate that the second AI model is trained based on a certain dataset. For example, an apparatus has multiple datasets, and the apparatus can train multiple AI models based on the multiple datasets; the dataset identifier can indicate the dataset used to obtain a certain AI model. As an example, different datasets can be identified by different identifiers.

It should be noted that the identification information of the second AI model may be randomly generated or may be predefined; it should be understood that this application does not limit this.

It should be understood that the foregoing identification information of the second AI model is merely an example, and this application does not limit this.

As shown in Table 4, assume, for example, that there are three AI models on the terminal device side.

Table 4

As can be seen from Table 4, the AI model corresponding to identifier 1 supports four resource configurations (resource configuration #1 to resource configuration #4), the AI model corresponding to identifier 2 supports three resource configurations (resource configuration #1 to resource configuration #3), and the AI model corresponding to identifier 3 supports three resource configurations (resource configuration #1 to resource configuration #3). For the description of the resource configurations supported by the AI model corresponding to identifier 1, refer to the foregoing description; details are not repeated here. For the AI model corresponding to identifier 2, the offset between adjacent resources of the supported aperiodically configured CSI-RS and the number of transmissions of the CSI-RS resource are not restricted. For the AI model corresponding to identifier 3, the offset between adjacent resources of the supported aperiodically configured CSI-RS is m6, and the number of transmissions of the CSI-RS resource is K6.

Further, the terminal device sends the identification information of the first AI model (for example, identifier 1) to the network device. Based on the received identifier 1 and the correspondences, obtained through protocol predefinition, between the identification information of all the AI models on the terminal device side and at least one resource configuration, the network device determines the at least one resource configuration supported by the first AI model corresponding to identifier 1, selects one resource configuration (for example, the first resource configuration) from the at least one resource configuration, and sends it to the terminal device.
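The lookup just described, in the spirit of Tables 3 and 4, can be sketched as a simple mapping from model identifiers to supported resource configurations, from which the network side then picks one entry. The data structure, the table contents, and the selection rule (take the first entry) below are arbitrary assumptions for illustration.

```python
# Illustrative: map a reported model identifier to its supported resource configurations
# and select one of them as the first resource configuration.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ResourceConfig:
    config_type: str                          # "periodic", "semi-persistent", or "aperiodic"
    adjacent_offset: Optional[int] = None     # m, meaningful for aperiodic configurations
    num_transmissions: Optional[int] = None   # K, meaningful for aperiodic configurations

SUPPORTED_CONFIGS = {
    1: [ResourceConfig("periodic"), ResourceConfig("semi-persistent"),
        ResourceConfig("aperiodic", adjacent_offset=4, num_transmissions=4),
        ResourceConfig("aperiodic", adjacent_offset=5, num_transmissions=8)],
    2: [ResourceConfig("periodic"), ResourceConfig("semi-persistent"),
        ResourceConfig("aperiodic")],
}

def select_first_resource_config(model_id: int) -> ResourceConfig:
    configs = SUPPORTED_CONFIGS[model_id]   # configurations supported by the reported model
    return configs[0]                       # network-side selection policy (assumed)

print(select_first_resource_config(1))
```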

Method 3:

When the first information includes the identification information of the first AI model, that is, when the terminal device sends the identification information of the first AI model to the network device, optionally, in one possible implementation, the method may further include: the terminal device sends second indication information to the network device, and correspondingly, the network device receives the second indication information.

Optionally, in one possible implementation, the second indication information indicates the correspondence between the identification information of the first AI model (for example, identifier 1) and at least one resource configuration, as shown in Table 3. After receiving the second indication information and the identification information of the first AI model (for example, identifier 1), the network device further determines, based on the identification information of the first AI model reported by the terminal device (that is, identifier 1) and the second indication information, the at least one resource configuration supported by the first AI model corresponding to identifier 1, selects one resource configuration (for example, the first resource configuration) from the at least one resource configuration, and sends it to the terminal device.

Optionally, in another possible implementation, the second indication information may also be used to indicate a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, where the second AI model can be understood as one or more AI models different from the first AI model.

As shown in Table 4, the second indication information indicates the correspondence between the AI model corresponding to identifier 1 and at least one resource configuration, the correspondence between the AI model corresponding to identifier 2 and at least one resource configuration, and the correspondence between the AI model corresponding to identifier 3 and at least one resource configuration. That is, the second indication information simultaneously indicates the correspondences between the identification information of the three AI models and their respective at least one resource configuration. Further, after receiving the second indication information and the identification information of the first AI model (for example, identifier 1), the network device determines, based on identifier 1 and the second indication information, the at least one resource configuration supported by the first AI model, selects one resource configuration (for example, the first resource configuration) from the at least one resource configuration, and sends it to the terminal device.

Optionally, in one possible implementation, the foregoing step in which the terminal device sends the second indication information to the network device may be replaced with: the terminal device sends second information to the network device, where the second information includes the second indication information. In other words, the terminal device may report the second indication information to the network device through the second information.

Specifically, the second information may be model-related information, or the second information may be registration request information, or the second information may be capability information of the terminal device. It should be understood that the foregoing is merely an example, and this application does not limit this.

For example, when the second information is model-related information, it can be understood that the second indication information is exchanged by the terminal device in the two-sided model pairing phase. For example, when the second information is a registration request message, it can be understood that the second indication information is exchanged by the terminal device in the phase of requesting access to the network device; that is, the terminal device sends, to the network device, a registration request message requesting registration with the network, where the registration request message includes the second indication information. For example, when the second information is capability information of the terminal device, it can be understood that the terminal device reports the second indication information while reporting its own capability information to the network device.

Optionally, in one possible implementation, the first information described above further includes function information of the first AI model.

Specifically, the terminal device sends the first information to the network device, and correspondingly, the network device receives the first information. The first information further includes the function information of the first AI model, where the function information of the first AI model may be time-domain channel state information (CSI) prediction-then-compression function information, spatial-domain and frequency-domain CSI compression function information, spatial-domain, frequency-domain, and time-domain CSI compression function information, time-domain CSI prediction function information, or beam time-domain prediction function information. It should be understood that the foregoing is merely an example, and this application does not limit this.

It should be understood that the function information of the first AI model can be understood as the problem that the first AI model can solve, or the task that the first AI model can perform.

For example, the first AI model can be used for time-domain channel state information CSI prediction-then-compression; for example, several historical CSIs or the current CSI are input into the first AI model, and the output predicted CSI of one or more future time instants is compressed.

For example, the first AI model can be used for spatial-domain and frequency-domain CSI compression; for example, the frequency-domain information and the spatial-domain information corresponding to the first resource configuration are jointly compressed according to the first AI model.

For example, the first AI model can be used for CSI compression in the spatial, frequency, and time domains; for example, the frequency-domain information, spatial-domain information, and time-domain information corresponding to the first resource configuration are jointly compressed according to the first AI model.

For example, the first AI model can be used for CSI prediction of time-domain channel state information; for example, several historical CSIs or the current CSI are input into the first AI model, and the predicted CSI of one or more future time instants is output.

For example, the first AI model can be used for beam time-domain prediction; for example, first channel information is input into the first AI model, and second channel information (or predicted channel information) is output. The first channel information is the RSRP of the beams in a first beam set, and the second channel information is the RSRP of the beams in a second beam set, the identifiers of the optimal K beams, or the like. The first beam set and the second beam set may be the same or different. The optimal K beams may be K beams in the second beam set whose channel quality (for example, RSRP or SINR) is greater than a certain threshold, where K is a positive integer.
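The selection of the optimal K beams described above can be sketched as follows: from predicted per-beam channel quality (RSRP here) for the second beam set, keep the identifiers of up to K beams whose quality exceeds the threshold. The threshold, K, and the example RSRP values are arbitrary assumptions for illustration.

```python
# Illustrative: pick up to K beam identifiers whose predicted RSRP exceeds a threshold.
def select_top_k_beams(rsrp_per_beam: dict[int, float], k: int, threshold: float) -> list[int]:
    above = [(beam_id, rsrp) for beam_id, rsrp in rsrp_per_beam.items() if rsrp > threshold]
    above.sort(key=lambda item: item[1], reverse=True)    # strongest beams first
    return [beam_id for beam_id, _ in above[:k]]

predicted_rsrp = {0: -85.0, 1: -78.5, 2: -92.3, 3: -80.1, 4: -75.4}   # dBm, per beam id
print(select_top_k_beams(predicted_rsrp, k=2, threshold=-90.0))       # [4, 1]
```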

As shown in Table 5, taking the case where the function information of the first AI model is time-domain CSI information compression as an example, the terminal device sends the first information to the network device, where the first information may include the identification information of the first AI model, the function information of the first AI model, and the resource configurations supported by the first AI model. The following example uses identifier 1 as the identification information of the first AI model.

Table 5

As can be seen from Table 5, the function information of the AI model corresponding to identifier 1 is time-domain CSI prediction-then-compression function information, and the AI model corresponding to identifier 1 supports the resource configurations shown (resource configuration #1 to resource configuration #4). For the resource configurations supported by the AI model corresponding to identifier 1, refer to the foregoing description; details are not repeated here.

Optionally, in one possible implementation, the first information described above may further include function information of a second AI model, where the second AI model can be understood as one or more AI models different from the first AI model. This can be understood as follows: the terminal device may send, to the network device, the function information of all the AI models supported on the terminal device side. As shown in Table 6, the description below assumes, as an example, that five AI models are supported on the terminal device side.

Table 6

从表6可以看出,标识1对应的AI模型的功能信息为时域CSI预测后压缩功能信息,标识2对应的AI模型的功能信息为空域和频域CSI压缩功能信息,标识3对应的AI模型的功能信息为空域,频域以及时域CSI压缩功能信息,标识4对应的AI模型的功能信息为时域CSI预测功能信息,标识5对应的AI模型的功能信息为波束时域预测功能信息。其中,标识4对应的AI模型支持的非周期配置CSI-RS的相邻资源的偏移为m7,和CSI-RS资源的发送次数为K7,标识5对应的AI模型支持的非周期配置CSI-RS的相邻资源的偏移为m8,和CSI-RS资源的发送次数为K8。需要说明,关于表6中所示的标识1至标识3对应的AI模型支持的资源配置的相关描述可参考前文所述,这里不予赘述。As shown in Table 6, the AI model corresponding to identifier 1 provides time-domain CSI prediction and compression functionality; the AI model corresponding to identifier 2 provides spatial and frequency-domain CSI compression functionality; the AI model corresponding to identifier 3 provides spatial, frequency, and time-domain CSI compression functionality; the AI model corresponding to identifier 4 provides time-domain CSI prediction functionality; and the AI model corresponding to identifier 5 provides beamforming time-domain prediction functionality. Specifically, the AI model corresponding to identifier 4 supports an offset of m7 for adjacent CSI-RS resources with an aperiodic configuration and a transmission count of K7 for CSI-RS resources; the AI model corresponding to identifier 5 supports an offset of m8 for adjacent CSI-RS resources with an aperiodic configuration and a transmission count of K8 for CSI-RS resources. It should be noted that the descriptions of the resource configurations supported by the AI models corresponding to identifiers 1 to 3 shown in Table 6 can be found above and will not be repeated here.

It should be understood that Tables 1 to 6 are merely examples. For instance, the function information of the AI model corresponding to identifier 1 in Table 6 may instead be the spatial-domain and frequency-domain CSI compression function information, the function information of the AI model corresponding to identifier 2 may be the time-domain CSI compression function information, and so on. This application imposes no limitation thereon.
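As a minimal sketch of how the first information of Tables 5 and 6 might be organized on the terminal device side before being reported (the identifiers, function labels, and resource-configuration fields below are illustrative assumptions, not a signalling format defined by this application):

```python
# Hypothetical in-memory representation of the first information: each entry maps a
# model identifier to its function information and the resource configurations it supports.
first_information = {
    1: {"function": "time-domain CSI prediction-then-compression",
        "resource_configs": ["config#1", "config#2", "config#3", "config#4"]},
    2: {"function": "spatial- and frequency-domain CSI compression",
        "resource_configs": ["config#5"]},
    4: {"function": "time-domain CSI prediction",
        "resource_configs": [{"type": "aperiodic", "adjacent_offset": "m7", "tx_count": "K7"}]},
    5: {"function": "beam time-domain prediction",
        "resource_configs": [{"type": "aperiodic", "adjacent_offset": "m8", "tx_count": "K8"}]},
}

# The terminal device would serialize and send this content as the first information.
print(first_information[1]["resource_configs"])
```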

Figure 7 is a schematic flowchart of a communication method 700 according to an embodiment of this application. As shown in Figure 7, the method includes at least the following steps.

Optionally, before step S720, the method includes: S710, the network device sends a second resource configuration to the terminal device, and correspondingly, the terminal device receives the second resource configuration.

S720: The terminal device sends a second channel report and third indication information to the network device, and correspondingly, the network device receives the second channel report and the third indication information.

The third indication information indicates that the second channel report is associated with a third AI model, where the second channel report is generated by processing, based on the third AI model, the channel information obtained through measurement on the second resource configuration. Specifically, after receiving the second resource configuration, the terminal device obtains channel information through measurement according to the second resource configuration and processes the channel information based on the third AI model. The terminal device then sends the third indication information to the network device to indicate that the second channel report is associated with the third AI model, so that the network device can decompress the second channel report using the AI model corresponding to the third AI model and obtain the recovered channel information.

For example, when the third AI model is an AI CSI prediction-then-compression model, processing the channel information based on the third AI model includes: performing prediction processing on the channel information based on the third AI model, and then further compressing the channel information obtained through the prediction processing.

For example, when the third AI model is an AI CSI compression model, processing the channel information based on the third AI model includes: compressing the channel information based on the third AI model.

It should be noted that the second channel report being associated with the third AI model may be understood as the second channel report being generated through processing based on the third AI model, where "associated with" may be replaced by "corresponding to"; this embodiment of this application imposes no limitation thereon.

Optionally, in a possible implementation, the third indication information may be the identification information of the third AI model. Specifically, after receiving the identification information of the third AI model, the network device learns that the second channel report was generated based on the third AI model, so that the network device can decompress the second channel report using the AI model corresponding to the third AI model and obtain the recovered channel information.
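A minimal sketch of this association on the network device side, assuming the third indication information carries a model identifier and the network device keeps a registry of paired decompression models (the class, registry, and identifiers below are hypothetical):

```python
class NetworkSideDecoder:
    """Placeholder for the network-side model paired with a terminal-side AI model."""
    def __init__(self, name):
        self.name = name

    def decompress(self, report_bits):
        # A real decoder would reconstruct channel information from the compressed
        # report; here we only describe what would be decoded.
        return f"channel information recovered by {self.name} from {len(report_bits)} bits"

# Hypothetical registry: terminal-side model identifier -> paired network-side decoder
# (the one-to-one correspondence described in the text).
decoder_registry = {
    "model_1": NetworkSideDecoder("decoder paired with the first AI model"),
    "model_3": NetworkSideDecoder("decoder paired with the third AI model"),
}

def recover_channel_info(report_bits, third_indication):
    """The third indication information carries the model identifier used to pick the decoder."""
    return decoder_registry[third_indication].decompress(report_bits)

print(recover_channel_info([0, 1, 1, 0], "model_3"))
```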

Optionally, before step S710, the method may further include: the network device determines the second resource configuration. It should be noted that, for the description of the network device determining the second resource configuration, reference may also be made to the foregoing description of the network device determining the first resource configuration; for brevity, details are not repeated here.

Optionally, in a possible implementation, before step S720, the method may further include: the network device sends fourth indication information to the terminal device, and correspondingly, the terminal device receives the fourth indication information.

The fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report includes a maximum feedback overhead of the second channel report. Specifically, after receiving the fourth indication information, the terminal device learns the maximum feedback overhead of the second channel report, selects the third AI model, processes the channel information obtained through measurement on the second resource configuration based on the third AI model, and generates the second channel report.

For example, the maximum feedback overhead of the second channel report included in the configuration information of the second channel report may be M bits.
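A minimal sketch of how a terminal device might use the maximum feedback overhead M to pick a model (the candidate models, their per-report payload sizes, and the preference order are assumptions made only for illustration):

```python
def select_model(candidates, max_feedback_bits):
    """Pick the candidate whose report fits the configured maximum feedback overhead.

    candidates: list of (model_id, report_payload_bits) tuples, assumed to be ordered
                by the terminal device's own preference (e.g., expected CSI accuracy).
    """
    for model_id, payload_bits in candidates:
        if payload_bits <= max_feedback_bits:
            return model_id
    return None  # nothing fits; the terminal device could fall back to non-AI reporting

# Example: maximum feedback overhead M = 96 bits signalled by the fourth indication information.
candidates = [("model_A", 128), ("model_B", 96), ("model_C", 64)]
print(select_model(candidates, max_feedback_bits=96))  # -> "model_B"
```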

Optionally, in a possible implementation, before step S720, the method may further include: the terminal device determines the information of the second channel report according to the second resource configuration. Specifically, the terminal device obtains channel information through measurement based on the second resource configuration and processes the channel information based on the third AI model to generate the second channel report.

It should be noted that, in this embodiment of this application, the information of the second channel report includes at least one of the following (a structural sketch is given after the list):

a quantity of resources corresponding to one or more measurement resources,

a measurement result corresponding to one or more measurement resources,

time domain information corresponding to one or more measurement resources,

frequency domain information corresponding to one or more measurement resources,

spatial domain information corresponding to one or more measurement resources, or

a quantity of bits corresponding to information of each of one or more measurement resources.
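The fields listed above can be pictured, purely for illustration, as the following Python structure (the class and field names are hypothetical and carry no normative meaning):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ChannelReportInfo:
    """Illustrative container for the items a channel report may carry (names are assumptions)."""
    resource_count: Optional[int] = None                             # quantity of measurement resources
    measurement_results: List[float] = field(default_factory=list)   # e.g., per-resource results
    time_domain_info: Optional[str] = None                           # e.g., "4 slots" or "current slot"
    frequency_domain_info: Optional[str] = None                      # e.g., a subband indication
    spatial_domain_info: Optional[str] = None                        # e.g., a port or beam indication
    bits_per_resource: Optional[int] = None                          # bits carried per resource's information

# Example corresponding in spirit to the first row of Table 8 (values assumed):
report = ChannelReportInfo(resource_count=4, time_domain_info="4 slots", bits_per_resource=24)
print(report)
```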

It should also be noted that, in this embodiment of this application, there are multiple AI models on the terminal device side, and the information of the channel reports generated based on different AI models differs. For example, as shown in Table 7, an example in which there are three AI models on the terminal device side is used for description.

Table 7

As can be seen from Table 7, the function information of the AI model corresponding to identifier 6 is the time-domain prediction-then-compression function information, the function information of the AI model corresponding to identifier 7 is the spatial-domain plus frequency-domain information compression function information, and the function information of the AI model corresponding to identifier 8 is the time-domain prediction together with spatial-domain, frequency-domain, and time-domain information compression function information. Accordingly, the information of the channel reports that these three AI models generate from the channel information measured on the second resource configuration also differs, as shown in Table 8.

Table 8

As can be seen from Table 8, the information of the channel report generated based on the AI model corresponding to identifier 6 includes: the CSI predicted for 4 slots, that is, the time domain information corresponding to the measurement resources covers 4 slots; and the total overhead M of the channel report, that is, the quantity of bits corresponding to the information of each of the multiple measurement resources may be M/4. The information of the channel report generated based on the AI model corresponding to identifier 7 includes: the CSI predicted for 1 slot, that is, the time domain information corresponding to the measurement resources covers 1 slot; and the total overhead M of the channel report, that is, the quantity of bits corresponding to the information of each of the multiple measurement resources may be M. The information of the channel report generated based on the AI model corresponding to identifier 8 includes: the CSI of the currently measured slot, that is, the time domain information corresponding to the measurement resources is the currently measured slot; and the total overhead M of the channel report, that is, the quantity of bits corresponding to the information of each of the multiple measurement resources.
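The per-resource bit counts discussed for Table 8 follow from simple arithmetic on the total report overhead M; a small sketch, where M = 96 bits is an assumed value used only to make the numbers concrete:

```python
M = 96  # assumed total report overhead in bits (illustrative value only)

# Identifier 6: CSI is predicted for 4 slots, so each slot's resource information
# is allotted M / 4 bits of the report.
bits_per_resource_id6 = M // 4    # -> 24

# Identifier 7: CSI for a single predicted slot, so that slot's resource information
# may use the full M bits.
bits_per_resource_id7 = M         # -> 96

print(bits_per_resource_id6, bits_per_resource_id7)
```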

It should be understood that Table 7 and Table 8 are merely examples and are not intended to limit this application.

It should be noted that, in a possible case, the third AI model may also be the first AI model described above; correspondingly, the second channel report may be the first channel report described above, and the second resource configuration may be the first resource configuration described above. In other words, the embodiments shown in Figure 6 and Figure 7 can be combined in some cases. Figure 8 is a schematic flowchart of a communication method 800 according to an embodiment of this application, and the method includes at least the following steps.

S810: The network device determines the second resource configuration.

Step S810 is similar to step S610 and is not described in detail here.

S820: The network device sends the second resource configuration, and correspondingly, the terminal device receives the second resource configuration.

Specifically, the terminal device processes the channel information obtained through measurement on the second resource configuration based on an AI model (for example, AI model #3) to generate the second channel report.

S830: The terminal device sends the second channel report and the third indication information to the network device, and correspondingly, the network device receives the second channel report and the third indication information.

It should be noted that step S830 is similar to step S720 and is not described in detail here.

Optionally, in a possible implementation, before step S830, the method may further include S840: the network device sends the fourth indication information to the terminal device, and correspondingly, the terminal device receives the fourth indication information. It should be noted that, for step S840, reference may be made to the foregoing description of the network device sending the fourth indication information to the terminal device before step S720; details are not repeated here.

Figure 9 is a schematic flowchart of a communication method 900 according to an embodiment of this application. As shown in Figure 9, the method includes at least the following steps.

S910: The terminal device sends first information to the network device, and correspondingly, the network device receives the first information. The first information includes at least one resource configuration supported by the first AI model.

Specifically, the terminal device sends the first information to the network device, and the first information includes at least one resource configuration supported by the first AI model on the terminal device side; that is, the terminal device reports, to the network device, at least one resource configuration supported by the first AI model. It should be noted that, for a specific description of the terminal device reporting the at least one resource configuration supported by the first AI model, refer to the description related to Table 1; details are not repeated here.

Optionally, in a possible implementation, the first information may further include the identification information of the first AI model. That is, in this case, the first information may include the identification information of the first AI model and the at least one resource configuration supported by the first AI model. For example, for the specific content of the first information, refer to Table 3; details are not repeated here.

Optionally, in a possible implementation, the first information may further include the function information of the first AI model. That is, in this case, the first information may include the identification information of the first AI model, the function information of the first AI model, and the at least one resource configuration supported by the first AI model. For example, for the specific content of the first information, refer to Table 5; details are not repeated here.

Optionally, in a possible implementation, the first information may further include at least one resource configuration supported by a second AI model, where the second AI model may be understood as one or more AI models different from the first AI model.

Specifically, the terminal device sends the first information to the network device, and the first information includes the at least one resource configuration supported by the first AI model on the terminal device side and at least one resource configuration supported by another AI model (the second AI model) on the terminal device side; that is, the terminal device reports, to the network device, the at least one resource configuration supported by the first AI model and the at least one resource configuration supported by the other AI model. Optionally, in some cases, the terminal device reports, to the network device, at least one resource configuration supported by every AI model on the terminal device side.

It should be noted that, for a specific description of the terminal device reporting the at least one resource configuration supported by the first AI model and the at least one resource configuration supported by the other AI model, refer to the foregoing description and the description related to Table 2; details are not repeated here.

Optionally, in a possible implementation, the first information may further include the identification information of the first AI model and the identification information of the second AI model. That is, in this case, the first information may include the identification information of the first AI model, the at least one resource configuration supported by the first AI model, the identification information of the second AI model, and the at least one resource configuration supported by the second AI model. For example, for the specific content of the first information, refer to Table 4; details are not repeated here.

Optionally, in a possible implementation, the first information may further include the function information of the first AI model and the function information of the second AI model. That is, in this case, the first information may include the identification information of the first AI model, the function information of the first AI model, the at least one resource configuration supported by the first AI model, the identification information of the second AI model, the function information of the second AI model, and the at least one resource configuration supported by the second AI model. For example, for the specific content of the first information, refer to Table 6; details are not repeated here.

S920: The network device sends a first resource configuration to the terminal device, and correspondingly, the terminal device receives the first resource configuration.

Specifically, after receiving the first information, the network device determines the at least one resource configuration supported by the AI model on the terminal device side, selects one resource configuration from the at least one resource configuration supported by the AI model on the terminal device side, and sends it to the terminal device. For example, the network device selects the first resource configuration and sends it to the terminal device, where the first resource configuration is one of the at least one resource configuration supported by the first AI model. In other words, the network device selects the first resource configuration from the at least one resource configuration supported by the first AI model and sends it to the terminal device.
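A minimal sketch of this selection step on the network device side (the configuration names and the scheduling preference below are assumptions for illustration only):

```python
def choose_resource_config(supported_configs, scheduler_preference):
    """Pick one resource configuration from the set the first AI model supports.

    supported_configs:    configurations reported in the first information.
    scheduler_preference: configurations ordered by the network device's own criteria
                          (load, periodicity, signalling overhead, ...).
    """
    for config in scheduler_preference:
        if config in supported_configs:
            return config
    # If nothing matches, the network device could fall back to a default (non-AI) configuration.
    return None

supported = {"config#1", "config#2", "config#3", "config#4"}   # reported by the terminal device
preferred = ["config#7", "config#2", "config#1"]               # network-side preference (assumed)
print(choose_resource_config(supported, preferred))            # -> "config#2"
```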

S930: The network device sends a reference signal (CSI-RS) to the terminal device, and correspondingly, the terminal device receives the CSI-RS.

S940: The terminal device performs measurement based on the received first resource configuration to obtain channel information.

S950: The terminal device processes the channel information based on the first AI model to obtain a first channel report.

The first AI model may be an AI CSI prediction-then-compression model or an AI CSI compression model.

For example, when the first AI model is an AI CSI compression model, the terminal device processing the channel information based on the first AI model includes: the terminal device compresses the channel information based on the first AI model. That is, in this example, the first channel report is generated by compressing the channel information based on the first AI model.

For example, when the first AI model is an AI CSI prediction-then-compression model, the terminal device processing the channel information based on the first AI model includes: the terminal device performs prediction processing on the channel information based on the first AI model and then further compresses the channel information obtained through the prediction processing. That is, in this example, the first channel report is generated by performing prediction processing on the channel information based on the first AI model and then further compressing the channel information obtained through the prediction processing.
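The two processing variants above can be pictured with the following sketch, in which the prediction and compression functions are simple stand-ins rather than actual AI models defined by this application (the array shapes and codeword size are also assumptions):

```python
import numpy as np

def predict_csi(measured, horizon_slots=4):
    """Stand-in for AI CSI prediction: extrapolate the latest measured slot over future slots."""
    return np.tile(measured[-1], (horizon_slots, 1))

def compress_csi(csi, codeword_bits=96):
    """Stand-in for AI CSI compression: quantize a truncated view of the CSI into a bit payload."""
    flat = csi.ravel()[:codeword_bits]            # keep only as many coefficients as report bits
    return (flat > flat.mean()).astype(np.uint8)  # 1 bit per kept coefficient

measured = np.random.default_rng(0).standard_normal((8, 32))  # 8 past slots x 32 coefficients (assumed shape)

# First AI model = AI CSI compression model: compress the measured channel directly.
report_a = compress_csi(measured)

# First AI model = AI CSI prediction-then-compression model: predict first, then compress the prediction.
report_b = compress_csi(predict_csi(measured))

print(report_a.size, report_b.size)   # both reports are codeword_bits long
```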

S960: The terminal device sends the first channel report to the network device, and correspondingly, the network device receives the first channel report.

S970: The network device decompresses the first channel report based on AI model *1 to obtain the recovered channel information.

AI model *1 matches the first AI model; in other words, AI model *1 and the first AI model are in a one-to-one correspondence.

According to the foregoing technical solution, the terminal device reports, to the network device, at least one resource configuration supported by the AI model on the terminal device side, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device, and further guarantees the CSI feedback performance.

Figure 10 is a schematic flowchart of a communication method 1000 according to an embodiment of this application. As shown in Figure 10, the method includes at least the following steps. It should be noted in advance that steps S1010 to S1040 are similar to steps S910 to S940 described above; for brevity, only the differences from the embodiment shown in Figure 9 are described below.

S1050: The terminal device performs prediction on the channel information based on the first AI model to obtain predicted channel information.

The first AI model may be an AI CSI prediction model or an AI beam time-domain prediction model.

According to the foregoing technical solution, the terminal device reports, to the network device, at least one resource configuration supported by the AI model on the terminal device side, which can guarantee or improve the prediction performance of the AI model when the network device applies different resource configurations to the terminal device.

Figure 11 is a schematic flowchart of a communication method 1100 according to an embodiment of this application. As shown in Figure 11, the method includes at least the following steps.

S1110: The network device sends the second resource configuration to the terminal device, and correspondingly, the terminal device receives the second resource configuration.

S1120: The network device sends the configuration information of the second channel report to the terminal device, and correspondingly, the terminal device receives the configuration information of the second channel report.

Specifically, the configuration information of the second channel report includes the maximum feedback overhead of the second channel report; after receiving the configuration information of the second channel report, the terminal device learns the maximum feedback overhead of the second channel report.

S1130: The network device sends a reference signal (CSI-RS) to the terminal device, and correspondingly, the terminal device receives the CSI-RS.

S1140: The terminal device performs measurement based on the second resource configuration to obtain channel information.

S1150: The terminal device processes the channel information based on the third AI model to obtain the second channel report.

Specifically, when the terminal device learns the maximum feedback overhead of the second channel report, the terminal device selects the third AI model based on the maximum feedback overhead of the second channel report, and processes, based on the third AI model, the channel information obtained through measurement on the second resource configuration to generate the second channel report. The third AI model may be an AI CSI prediction-then-compression model or an AI CSI compression model.

For example, when the third AI model is an AI CSI compression model, the terminal device compresses the channel information using the third AI model and then generates the second channel report.

For example, when the third AI model is an AI CSI prediction-then-compression model, the terminal device performs prediction processing on the channel information using the third AI model, then further compresses the channel information obtained through the prediction processing, and generates the second channel report.

S1160: The terminal device sends the second channel report and the third indication information to the network device, and correspondingly, the network device receives the second channel report and the third indication information.

The third indication information indicates that the second channel report is associated with the third AI model. Specifically, the terminal device sends the third indication information to the network device to indicate that the second channel report is associated with the third AI model. After receiving the third indication information and the second channel report, the network device learns that the second channel report was generated through processing based on the third AI model, so that the network device can decompress the second channel report using the AI model corresponding to the third AI model and obtain the recovered channel information.

S1170: The network device decompresses the second channel report based on AI model *2 to obtain the recovered channel information.

AI model *2 matches the third AI model; in other words, AI model *2 and the third AI model are in a one-to-one correspondence.

It should be noted that, in this embodiment of this application, there may be multiple AI models on the terminal device side, and the information of the second channel report generated based on different AI models differs. For a specific description, refer to Table 7 and Table 8 above; details are not repeated here.

According to the foregoing technical solution, the terminal device matches, in real time, the configuration information of the channel report sent by the network device and selects the most suitable AI model, which can guarantee or improve the performance of the AI model when the network device applies different resource configurations to the terminal device, and further guarantees the CSI feedback performance.

The content shown in Figures 6 to 11 is described using an example in which the first device is a terminal device and the second device is a network device. The following describes, with reference to Figures 12 to 14, scenarios in which the first device and the second device are AI entities serving the terminal device and the network device, respectively. The AI entity on the terminal device side may be the terminal device itself, or an AI entity serving the terminal device, for example, a server such as an OTT server or a cloud server. The AI entity on the network device side may be the network device itself, or an AI entity serving the network device, for example, a RAN, a RIC, an OAM, or a server such as a cloud server. The AI entity of the network device may also be replaced by an intelligent network element or the like, which is not limited here. It should be noted that an intelligent network element may also be understood as a network device with AI functionality, and may be applied in an O-RAN architecture or a non-O-RAN architecture, which is not limited here.

Figure 12 is a schematic flowchart of another communication method 1200 according to an embodiment of this application. As shown in Figure 12, the first device is an OTT and the second device is a near-real-time RIC, and the method includes:

S1201: The OTT sends the first information to the terminal device, and correspondingly, the terminal device receives the first information.

S1202: The terminal device sends the first information to the network device, and correspondingly, the network device receives the first information.

S1203: The network device sends the first information to the near-real-time RIC, and correspondingly, the near-real-time RIC receives the first information.

S1204: The network device sends the first resource configuration to the terminal device, and correspondingly, the terminal device receives the first resource configuration.

S1205: The network device sends a reference signal (CSI-RS) to the terminal device, and correspondingly, the terminal device receives the CSI-RS.

S1206: The terminal device performs measurement based on the received first resource configuration to obtain channel information.

For the descriptions of steps S1204 to S1206, refer to steps S920 to S940; details are not repeated here.

S1207: The terminal device sends the channel information to the OTT, and correspondingly, the OTT receives the channel information.

S1208: The OTT processes the channel information based on the first AI model to obtain the first channel report.

For the description of step S1208, refer to the description of step S950 above; details are not repeated here.

S1209: The OTT sends the first channel report to the terminal device, and correspondingly, the terminal device receives the first channel report.

S1210: The terminal device sends the first channel report to the network device, and correspondingly, the network device receives the first channel report.

S1211: The network device sends the first channel report to the near-real-time RIC, and correspondingly, the near-real-time RIC receives the first channel report.

S1212: The near-real-time RIC decompresses the first channel report based on AI model *1 to obtain the recovered channel information.

AI model *1 matches the first AI model; in other words, AI model *1 and the first AI model are in a one-to-one correspondence.

Figure 13 is a schematic flowchart of another communication method 1300 according to an embodiment of this application. As shown in Figure 13, the first device is an OTT and the second device is a near-real-time RIC, and the method includes the following steps. It should be noted in advance that steps S1301 to S1307 are similar to steps S1201 to S1207 described above; for brevity, only the differences from the embodiment shown in Figure 12 are described below.

S1308: The OTT performs prediction on the channel information based on the first AI model to obtain predicted channel information.

The first AI model may be an AI CSI prediction model or an AI beam time-domain prediction model.

Figure 14 is a schematic flowchart of another communication method 1400 according to an embodiment of this application. As shown in Figure 14, the first device is an OTT and the second device is a near-real-time RIC, and the method includes:

S1401: The network device sends the second resource configuration to the terminal device, and correspondingly, the terminal device receives the second resource configuration.

S1402: The network device sends the configuration information of the second channel report to the terminal device, and correspondingly, the terminal device receives the configuration information of the second channel report.

For the description of S1402, refer to the description of step S1120 above; details are not repeated here.

S1403: The network device sends a reference signal (CSI-RS) to the terminal device, and correspondingly, the terminal device receives the CSI-RS.

S1404: The terminal device performs measurement based on the second resource configuration to obtain channel information.

S1405: The terminal device sends the channel information to the OTT, and correspondingly, the OTT receives the channel information.

S1406: The OTT processes the channel information based on the third AI model to obtain the second channel report.

For the description of step S1406, refer to the description of step S1150 above; details are not repeated here.

S1407: The OTT sends the second channel report and the third indication information to the terminal device, and correspondingly, the terminal device receives the second channel report and the third indication information.

S1408: The terminal device sends the second channel report and the third indication information to the network device, and correspondingly, the network device receives the second channel report and the third indication information.

For the descriptions of steps S1407 and S1408, refer to the description of step S1160 above; details are not repeated here.

S1409: The network device sends the second channel report and the third indication information to the near-real-time RIC, and correspondingly, the near-real-time RIC receives the second channel report and the third indication information.

S1410: The near-real-time RIC decompresses the second channel report based on AI model *2 to obtain the recovered channel information.

AI model *2 matches the third AI model; in other words, AI model *2 and the third AI model are in a one-to-one correspondence.

Finally, the device embodiments of this application are described.

To implement the functions in the methods provided in this application, the first device and the second device may each include a hardware structure and/or a software module, and implement the foregoing functions in the form of a hardware structure, a software module, or a hardware structure plus a software module. Whether a given function is performed by a hardware structure, a software module, or a hardware structure plus a software module depends on the specific application and design constraints of the technical solution.

Figure 15 is a schematic block diagram of a communication device according to an embodiment of this application. The communication device includes a processing circuit 1510 and a transceiver circuit 1520, which may be connected or coupled to each other, for example, through a bus 1530. The communication device may be the first device or the second device.

Optionally, the communication device may further include a memory 1540. The memory 1540 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), or a compact disc read-only memory (CD-ROM), and the memory 1540 is used for related instructions and data.

The processing circuit 1510 may be all or part of the processing or control circuitry of one or more processors, or may be one or more processors. The processor may be a central processing unit (CPU). When the processing circuit 1510 is a CPU, the CPU may be a single-core CPU or a multi-core CPU.

The processing circuit 1510 may be a signal processor, a chip, or another integrated circuit capable of implementing the methods of this application, or a portion of the foregoing processor, chip, or integrated circuit that is used for processing functions.

The transceiver circuit 1520 may also be a transceiver, or an input/output interface used for the input or output of signals or data, which may also be called an input/output circuit.

When the communication device is the first device, the transceiver circuit 1520 is configured to perform the following operation: receive first indication information, where the first indication information indicates a first resource configuration, and the first resource configuration is one of at least one resource configuration supported by a first artificial intelligence (AI) model.

Optionally, in some examples, the processing circuit 1510 is configured to determine a first channel report, where the first channel report is determined by processing, based on the first AI model, channel information obtained through measurement on the first resource configuration.

When the communication device is the second device, the processing circuit 1510 is configured to perform the following operation: determine a first resource configuration, where the first resource configuration is one of at least one resource configuration supported by a first AI model, and the first AI model is used to process channel information obtained through measurement based on the first resource configuration.

The transceiver circuit 1520 is configured to perform the following operation: send first indication information, where the first indication information indicates the first resource configuration.

The foregoing content is merely an exemplary description. When the communication device is the first device or the second device, it performs the methods or steps related to the first device or the second device in the foregoing method embodiments.

When the communication device is the first device or the second device, the transceiver circuit 1520 may be a transceiver or an interface circuit.

When the communication device is a chip used in the first device or the second device, the transceiver circuit 1520 may be an input/output circuit.

For details, refer to the content shown in the foregoing method embodiments.

For the implementation of the operations in Figure 15, reference may also be made to the corresponding descriptions of the method embodiments shown in Figures 6 to 14.

Figure 16 is another schematic block diagram of a communication device according to an embodiment of this application. The communication device may be the first device or the second device, and is configured to implement the methods in the foregoing embodiments.

The communication device includes a transceiver unit 1610 and a processing unit 1620. The transceiver unit 1610 may include a sending unit and a receiving unit. The sending unit is configured to perform the sending actions of the communication device, and the receiving unit is configured to perform the receiving actions of the communication device. For ease of description, in this embodiment of this application, the sending unit and the receiving unit are combined into one transceiver unit. This is explained here once and is not repeated below.

When the communication device is the first device, for example, the transceiver unit 1610 is configured to receive the first indication information and the like, and the processing unit 1620 is configured to perform the processing, control, and other steps of the first device. For example, the processing unit 1620 is configured to determine the first channel report and the like.

When the communication device is the second device, for example, the transceiver unit 1610 is configured to send the first indication information and the like, and the processing unit 1620 is configured to perform the processing, control, and other steps of the second device. For example, the processing unit 1620 is configured to determine the first resource configuration and the like.

When the communication device is the first device or the second device, it performs one or more of the methods or steps related to the first device or the second device in the foregoing method embodiments.

Optionally, the communication device further includes a storage unit 1630, configured to store a program or code for performing the foregoing methods.

The transceiver unit in Figure 16 may correspond to the transceiver circuit in Figure 15, and the processing unit in Figure 16 may correspond to the processing circuit in Figure 15.

The device embodiments shown in Figures 15 and 16 are used to implement the content described in Figures 6 to 14. For the specific execution steps and methods of the devices shown in Figures 15 and 16, refer to the foregoing method embodiments.

This application further provides a chip, including a processor, configured to call and run instructions stored in a memory, so that a communication device on which the chip is installed performs the methods in the foregoing examples. The memory may be integrated within the chip or located outside the chip.

This application further provides another chip, including an input interface, an output interface, and a processing circuit, where the input interface, the output interface, and the processing circuit are connected through an internal connection path, and the processing circuit is configured to execute code in a memory; when the code is executed, the processing circuit is configured to perform the methods in the foregoing examples. Optionally, the chip further includes a memory, configured to store a computer program or code. The input interface and the output interface may be independent of each other, or may be integrated into an input/output interface.

The processing circuit may be all or part of the processing circuitry in one or more processors, or one or more processors.

This application further provides a processor, configured to be coupled with a memory and configured to perform the methods and functions related to the first device or the second device in any one of the foregoing embodiments.

In another embodiment of this application, a computer program product including instructions is provided; when the computer program product runs on a computer, the methods of the foregoing embodiments are implemented.

This application further provides a computer program; when the computer program is run on a computer, the methods of the foregoing embodiments are implemented.

In another embodiment of this application, a computer-readable storage medium is provided; the computer-readable storage medium stores a computer program, and when the computer program is executed by a computer, the methods described in the foregoing embodiments are implemented.

It should be understood that, in the embodiments of this application, the processor may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.

It should also be understood that the memory in the embodiments of this application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memory of the systems and methods described herein is intended to include, but is not limited to, these and any other suitable types of memory.

The foregoing embodiments may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When software is used for implementation, the foregoing embodiments may be implemented, in whole or in part, in the form of a computer program product. The computer program product includes one or more computer instructions or computer programs. When the computer instructions or computer programs are loaded or executed on a computer, the procedures or functions described in the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium. The semiconductor medium may be a solid-state drive.

It should be understood that, in the various embodiments of this application, the sequence numbers of the foregoing processes do not imply an execution order. The execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of this application.

A person of ordinary skill in the art may be aware that the units and algorithm steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application, but such implementation should not be considered beyond the scope of this application. A person skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not repeated here. In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other manners. For example, the device embodiments described above are merely illustrative; for example, the division into units is merely a logical functional division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces; the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.

The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. When the foregoing functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application essentially, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, or an optical disc.

The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (37)

1. A communication method, comprising: receiving first indication information, wherein the first indication information indicates a first resource configuration, and the first resource configuration is one of at least one resource configuration supported by a first artificial intelligence (AI) model, wherein the first AI model is used to process channel information obtained through measurement based on the first resource configuration.

2. The method according to claim 1, wherein the resource configuration comprises: a configuration type, an offset between adjacent resources, or a quantity of transmissions, and the configuration type comprises at least one of the following: a periodic configuration, a semi-static configuration, or an aperiodic configuration.

3. The method according to claim 1 or 2, wherein the method further comprises: determining a first channel report, wherein the first channel report is determined by processing, based on the first AI model, the channel information obtained through measurement based on the first resource configuration.

4. The method according to any one of claims 1 to 3, wherein the method further comprises: sending first information, wherein the first information is associated with the at least one resource configuration.

5. The method according to claim 4, wherein the first information comprises the at least one resource configuration.

6. The method according to claim 4, wherein the first information comprises identification information of the first AI model.

7. The method according to claim 6, wherein the method further comprises: sending second indication information, wherein the second indication information indicates a correspondence between the identification information of the first AI model and the at least one resource configuration.

8. The method according to claim 7, wherein the second indication information further indicates a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, and the second AI model is different from the first AI model.

9. The method according to claim 7 or 8, wherein the sending second indication information comprises: sending second information, wherein the second information comprises the second indication information, and the second information comprises any one of the following: model-related information, registration request information, or capability information of a terminal device.

10. The method according to any one of claims 4 to 9, wherein the first information comprises function information of the first AI model, and the function information comprises any one of the following: time-domain channel state information (CSI) prediction-then-compression function information, spatial-domain and frequency-domain CSI compression function information, spatial-domain, frequency-domain, and time-domain CSI compression function information, time-domain CSI prediction function information, or beam time-domain prediction function information.

11. The method according to any one of claims 1 to 10, comprising: sending a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained through measurement based on a second resource configuration.

12. The method according to claim 11, wherein the method further comprises: receiving fourth indication information, wherein the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report comprises a maximum feedback overhead of the second channel report.

13. The method according to claim 11 or 12, wherein the method further comprises: determining information of the second channel report based on the second resource configuration.

14. The method according to any one of claims 11 to 13, wherein the information of the second channel report comprises at least one of the following: a quantity of resources corresponding to one or more measurement resources, measurement results corresponding to the one or more measurement resources, time-domain information corresponding to the one or more measurement resources, frequency-domain information corresponding to the one or more measurement resources, spatial-domain information corresponding to the one or more measurement resources, or a quantity of bits corresponding to information of each of the one or more measurement resources.

15. A communication method, comprising: determining a first resource configuration, wherein the first resource configuration is one of at least one resource configuration supported by a first AI model, and the first AI model is used to process channel information obtained through measurement based on the first resource configuration; and sending first indication information, wherein the first indication information indicates the first resource configuration.

16. The method according to claim 15, wherein the resource configuration comprises: a configuration type, an offset between adjacent resources, or a quantity of transmissions, and the configuration type comprises at least one of the following: a periodic configuration, a semi-static configuration, or an aperiodic configuration.

17. The method according to claim 15 or 16, wherein the method further comprises: obtaining first information, wherein the first information is associated with the at least one resource configuration.

18. The method according to claim 17, wherein the first information comprises the at least one resource configuration.

19. The method according to claim 17, wherein the first information comprises identification information of the first AI model.

20. The method according to claim 19, wherein the method further comprises: obtaining second indication information, wherein the second indication information indicates a correspondence between the identification information of the first AI model and the at least one resource configuration.

21. The method according to claim 20, wherein the second indication information further indicates a correspondence between identification information of a second AI model and at least one resource configuration supported by the second AI model, and the second AI model is different from the first AI model.

22. The method according to claim 20 or 21, wherein the receiving second indication information comprises: obtaining second information, wherein the second information comprises the second indication information, and the second information comprises any one of the following: model-related information, registration request information, or capability information of a terminal device.

23. The method according to any one of claims 17 to 22, wherein the first information comprises function information of the first AI model, and the function information comprises any one of the following: time-domain CSI prediction-then-compression function information, spatial-domain and frequency-domain CSI compression function information, spatial-domain, frequency-domain, and time-domain CSI compression function information, time-domain CSI prediction function information, or beam time-domain prediction function information.

24. The method according to any one of claims 15 to 23, comprising: obtaining a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained through measurement based on a second resource configuration.

25. The method according to claim 24, wherein the method further comprises: sending fourth indication information, wherein the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report comprises a maximum feedback overhead of the second channel report.

26. The method according to claim 24 or 25, wherein information of the second channel report comprises at least one of the following: a quantity of resources corresponding to one or more measurement resources, measurement results corresponding to the one or more measurement resources, time-domain information corresponding to the one or more measurement resources, frequency-domain information corresponding to the one or more measurement resources, spatial-domain information corresponding to the one or more measurement resources, or a quantity of bits corresponding to information of each of the one or more measurement resources.

27. A communication method, comprising: sending a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained through measurement based on a second resource configuration.

28. The method according to claim 27, wherein the method further comprises: receiving fourth indication information, wherein the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report comprises a maximum feedback overhead of the second channel report.

29. The method according to claim 27 or 28, wherein the method further comprises: determining information of the second channel report based on the second resource configuration.

30. The method according to any one of claims 27 to 29, wherein the information of the second channel report comprises at least one of the following: a quantity of resources corresponding to one or more measurement resources, measurement results corresponding to the one or more measurement resources, time-domain information corresponding to the one or more measurement resources, frequency-domain information corresponding to the one or more measurement resources, spatial-domain information corresponding to the one or more measurement resources, or a quantity of bits corresponding to information of each of the one or more measurement resources.

31. A communication method, comprising: obtaining a second channel report and third indication information, wherein the third indication information indicates that the second channel report is associated with a third AI model, and the second channel report is generated by processing, based on the third AI model, channel information obtained through measurement based on a second resource configuration.

32. The method according to claim 31, wherein the method further comprises: sending fourth indication information, wherein the fourth indication information indicates configuration information of the second channel report, and the configuration information of the second channel report comprises a maximum feedback overhead of the second channel report.

33. The method according to claim 31 or 32, wherein the information of the second channel report comprises at least one of the following: a quantity of resources corresponding to one or more measurement resources, measurement results corresponding to the one or more measurement resources, time-domain information corresponding to the one or more measurement resources, frequency-domain information corresponding to the one or more measurement resources, spatial-domain information corresponding to the one or more measurement resources, or a quantity of bits corresponding to information of each of the one or more measurement resources.

34. A communication apparatus, comprising a module or unit configured to implement the method according to any one of claims 1 to 14, or a module or unit configured to implement the method according to any one of claims 15 to 26, or a module or unit configured to implement the method according to any one of claims 27 to 30, or a module or unit configured to implement the method according to any one of claims 31 to 33.

35. The communication apparatus according to claim 34, wherein: in a case in which the communication apparatus comprises the module or unit configured to implement the method according to any one of claims 1 to 14 or the module or unit configured to implement the method according to any one of claims 27 to 30, the communication apparatus is a terminal device, a chip used in a terminal device, or a server on a terminal device side; or in a case in which the communication apparatus comprises the module or unit configured to implement the method according to any one of claims 15 to 26 or the module or unit configured to implement the method according to any one of claims 31 to 33, the communication apparatus is a network device, a chip used in a network device, or a server on a network device side.

36. A communication system, comprising an apparatus configured to perform the method according to any one of claims 1 to 14 and an apparatus configured to implement the method according to any one of claims 15 to 26, or an apparatus configured to implement the method according to any one of claims 27 to 30 and an apparatus configured to implement the method according to any one of claims 31 to 33.

37. A readable storage medium, configured to store instructions, wherein when the instructions are executed, the method according to any one of claims 1 to 33 is performed.
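As an illustration of the claimed signaling flow, and only an illustration (the claims, not this code, define the method), the Python sketch below walks through the terminal-side steps of claims 1 to 3: receive first indication information indicating a first resource configuration, check that the indicated configuration is among those the first AI model supports, measure on it, and let the model produce the first channel report. Every identifier in it (ResourceConfig, FirstAIModel, handle_first_indication) and every concrete data type is an assumption introduced here for readability; none of them appears in the application.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass(frozen=True)
class ResourceConfig:
    """A reference-signal resource configuration (cf. claim 2); field names are illustrative."""
    config_type: str        # "periodic", "semi-static", or "aperiodic"
    adjacent_offset: int    # offset between adjacent resources
    num_transmissions: int  # quantity of transmissions


@dataclass
class FirstAIModel:
    """An AI model together with the resource configurations it supports."""
    model_id: str
    supported_configs: Sequence[ResourceConfig]
    # e.g. a CSI compression or CSI prediction function applied to the measurements
    process: Callable[[List[complex]], bytes]


def handle_first_indication(model: FirstAIModel,
                            indicated: ResourceConfig,
                            measure: Callable[[ResourceConfig], List[complex]]) -> bytes:
    """Check that the indicated configuration is one of the at least one
    configuration supported by the first AI model, measure on it, and have the
    model process the channel information into a first channel report
    (cf. claims 1 and 3)."""
    if indicated not in model.supported_configs:
        raise ValueError("indicated resource configuration is not supported by this AI model")
    channel_info = measure(indicated)   # channel information measured based on the first resource configuration
    return model.process(channel_info)  # first channel report
```

A network device performing the method of claim 15 would carry out the mirror-image steps: select one configuration from the set associated with the model, obtained for example via the first information of claim 17, and send the first indication information indicating it.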
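Under the same caveat, a second sketch shows one possible way to assemble the second channel report of claims 11 to 14 while staying within the maximum feedback overhead configured by the fourth indication information of claim 12. The field layout, the per-resource packing order, and all identifiers are assumptions made for this sketch only; the claims leave them open.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class MeasuredResource:
    """Per-resource quantities that may appear in the second channel report (cf. claim 14)."""
    resource_id: int
    result_bits: str     # quantized measurement result for this resource
    time_info: int       # time-domain information, e.g. an occasion index
    freq_info: int       # frequency-domain information, e.g. a subband index
    spatial_info: int    # spatial-domain information, e.g. a port or beam index


@dataclass
class SecondChannelReport:
    third_model_id: str  # the associated third AI model, conveyed via the third indication information (claim 11)
    num_resources: int = 0
    payload: List[str] = field(default_factory=list)


def build_second_report(third_model_id: str,
                        measurements: List[MeasuredResource],
                        bits_per_resource: int,
                        max_feedback_bits: int) -> SecondChannelReport:
    """Add per-resource information until the maximum feedback overhead of the
    second channel report (claim 12) would be exceeded."""
    report = SecondChannelReport(third_model_id=third_model_id)
    used_bits = 0
    for m in measurements:
        if used_bits + bits_per_resource > max_feedback_bits:
            break  # stay within the configured maximum feedback overhead
        # one payload entry per measurement resource: its result plus its
        # time-domain, frequency-domain, and spatial-domain information (cf. claim 14)
        report.payload.append(
            f"{m.resource_id}:{m.time_info}:{m.freq_info}:{m.spatial_info}:"
            f"{m.result_bits[:bits_per_resource]}")
        report.num_resources += 1
        used_bits += bits_per_resource
    return report
```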
PCT/CN2025/093323 2024-05-10 2025-05-08 Communication method and communication apparatus Pending WO2025232813A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202410579828.0A CN120934711A (en) 2024-05-10 2024-05-10 Communication method and communication device
CN202410579828.0 2024-05-10

Publications (1)

Publication Number Publication Date
WO2025232813A1

Family ID: 97581060

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2025/093323 Pending WO2025232813A1 (en) 2024-05-10 2025-05-08 Communication method and communication apparatus

Country Status (2)

Country Link
CN (1) CN120934711A (en)
WO (1) WO2025232813A1 (en)

Also Published As

Publication number Publication date
CN120934711A (en) 2025-11-11

Similar Documents

Publication Publication Date Title
US20250141525A1 (en) Communication method and apparatus
WO2023006096A1 (en) Communication method and apparatus
WO2024208296A1 (en) Communication method and communication apparatus
WO2024169757A1 (en) Communication method and communication apparatus
WO2025232813A1 (en) Communication method and communication apparatus
WO2025167701A1 (en) Communication method and communication apparatus
WO2025232639A1 (en) Artificial intelligence model monitoring method and communication device
WO2025209331A1 (en) Information transmission method, apparatus, and system
WO2025209305A9 (en) Information transmission method, apparatus and system
WO2025218595A1 (en) Communication method and apparatus
US20250202560A1 (en) Communication method and apparatus
WO2025185425A1 (en) Wireless model, information processing method and device, and system
US20250317254A1 (en) Communication method and communication apparatus
WO2025067480A1 (en) Communication method, apparatus and system
WO2025140003A1 (en) Communication method and communication apparatus
WO2025140663A1 (en) Model data acquisition method, apparatus and system
WO2025209433A1 (en) Communication method and apparatus
WO2025167989A1 (en) Communication method and communication apparatus
WO2025092630A1 (en) Communication method and communication apparatus
WO2025161853A1 (en) Model monitoring method, apparatus and system
WO2025241999A1 (en) Communication method and communication apparatus
WO2025139843A1 (en) Communication method and communication apparatus
WO2025209513A1 (en) Data collection method and related apparatus
WO2025161598A1 (en) Network training method, and communication apparatus
WO2025209317A1 (en) Channel feedback information transmission method and apparatus