
WO2025129531A1 - Communication method, first network element, second network element and communication system - Google Patents

Communication method, first network element, second network element and communication system

Info

Publication number
WO2025129531A1
Authority
WO
WIPO (PCT)
Prior art keywords
function
network element
terminal
network device
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/140443
Other languages
English (en)
Chinese (zh)
Inventor
李小龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to PCT/CN2023/140443 (WO2025129531A1)
Priority to CN202380097985.1A (CN121080003A)
Publication of WO2025129531A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 8/00 - Network data management
    • H04W 8/02 - Processing of mobility data, e.g. registration information at HLR [Home Location Register] or VLR [Visitor Location Register]; Transfer of mobility data, e.g. between HLR, VLR or external networks
    • H04W 8/08 - Mobility data transfer
    • H04W 8/14 - Mobility data transfer between corresponding nodes

Definitions

  • the present disclosure relates to the field of communication technology, and in particular to a communication method, a first network element, a second network element, and a communication system.
  • the embodiments of the present disclosure provide a communication method, a first network element, a second network element, and a communication system.
  • a communication method comprising: a first network element sends first information to a second network element, the first information comprising artificial intelligence (AI) function information running on a terminal.
  • a communication method comprising: a second network element receives first information, the first information is sent by a first network element, and the first information includes artificial intelligence (AI) function information running on a terminal.
  • a communication method comprising: a first network element sends first information to a second network element, wherein the first information includes artificial intelligence (AI) function information running on a terminal; and the second network element receives the first information.
  • a first network element comprising: a transceiver module, configured to send first information to a second network element, wherein the first information comprises artificial intelligence (AI) function information running on a terminal.
  • a second network element comprising: a transceiver module, configured to receive first information, wherein the first information is sent by the first network element, and the first information includes artificial intelligence (AI) function information running on the terminal.
  • a first network element, comprising: one or more processors; wherein the one or more processors are configured to execute the communication method of the first aspect.
  • a second network element, comprising: one or more processors; wherein the one or more processors are configured to execute the communication method of the second aspect.
  • a communication system including a terminal and a network device, wherein the terminal is configured to implement the communication method of the first aspect, and the network device is configured to implement the communication method of the second aspect.
  • a storage medium storing instructions which, when executed on a communication device, cause the communication device to execute the communication method of any one of the first aspect and the second aspect.
  • FIG. 1 is a schematic diagram of the architecture of a communication system according to an embodiment of the present disclosure.
  • FIG. 2A is an interactive schematic diagram of a communication method according to an embodiment of the present disclosure.
  • FIG. 2B is an interactive schematic diagram of a communication method corresponding to case -A) according to an embodiment of the present disclosure.
  • FIG. 2C is an interactive schematic diagram of a communication method corresponding to case -B) according to an embodiment of the present disclosure.
  • FIG. 2D is an interactive schematic diagram of a communication method corresponding to case -C) according to an embodiment of the present disclosure.
  • FIG. 2E is an interactive schematic diagram of a communication method corresponding to case -D) according to an embodiment of the present disclosure.
  • FIG. 3 is a flow chart of a communication method according to an embodiment of the present disclosure.
  • FIG. 4A is a flow chart of a communication method according to an embodiment of the present disclosure.
  • FIG. 4B is a flow chart of a communication method according to an embodiment of the present disclosure.
  • FIG. 5 is an interactive schematic diagram of a communication method according to an embodiment of the present disclosure.
  • FIG. 6A is a schematic diagram of the structure of a first network element proposed in an embodiment of the present disclosure.
  • FIG. 6B is a schematic diagram of the structure of a second network element proposed in an embodiment of the present disclosure.
  • FIG. 7A is a schematic diagram of the structure of a communication device proposed in an embodiment of the present disclosure.
  • FIG. 7B is a schematic diagram of the structure of a chip proposed in an embodiment of the present disclosure.
  • the embodiments of the present disclosure provide a communication method, a first network element, a second network element, and a communication system.
  • an embodiment of the present disclosure proposes that a first network element sends first information to a second network element, wherein the first information includes artificial intelligence AI function information running on the terminal.
  • in this way, the first information is sent to the second network element by the first network element, so that the AI function information running on the terminal reaches the second network element; the second network element can then coordinate the AI model or AI function based on the first information, thereby ensuring the stability of the communication process.
  • the AI function information is configured by a third network element.
  • the first network element is the terminal, the second network element is a core network device, and the third network element is an access network device.
  • the terminal reports the AI function information configured by the access network device to the core network device.
  • the first information includes at least one of the following: an AI function for channel state information (CSI) prediction configured by the third network element; an AI function for CSI compression configured by the third network element; an AI function for beam management configured by the third network element; a first AI function configured by the third network element, the first AI function being an AI function configured by the third network element other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the first AI function configured by the third network element; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • in the case where the first network element is the terminal, the second network element is the core network device, the third network element is the access network device, and the AI function information is configured by the third network element, the specific content of the first information is thus realized.
  • AI function reporting in this case is thus realized, and the core network device can coordinate the AI function or AI model based on the reported AI function to ensure the stability of communication.
  • the first information is carried by at least one of the following: the LTE Positioning Protocol (LPP); the Sidelink Positioning Protocol (SLPP).
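  • purely as an illustration of case -A), the first information reported by the terminal to the core network device could be modelled as a simple container of the items listed above; the structure and field names below are hypothetical assumptions introduced for readability, not part of the application:
```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FirstInformationCaseA:
    """Hypothetical sketch of the 'first information' a terminal might report
    to the core network device (e.g. an LMF) in case -A); the field names are
    illustrative and simply mirror the items listed in the summary above."""
    ran_configured_ai_functions: List[str] = field(default_factory=list)  # e.g. ["CSI prediction"]
    run_time_ms: Optional[int] = None          # running time of the AI function
    run_start_time: Optional[str] = None       # start time at which the AI function runs
    run_end_time: Optional[str] = None         # end time at which the AI function runs
    compute_for_csi_prediction: Optional[float] = None
    compute_for_csi_compression: Optional[float] = None
    compute_for_beam_management: Optional[float] = None
    compute_for_cn_configured_ai: Optional[float] = None
    max_simultaneous_cn_ai_functions: Optional[int] = None
    max_simultaneous_cn_ai_models: Optional[int] = None
    carried_by: str = "LPP"                    # "LPP" and/or "SLPP"


# Example: a terminal running a RAN-configured CSI prediction function reports it to the LMF.
report = FirstInformationCaseA(
    ran_configured_ai_functions=["CSI prediction"],
    compute_for_cn_configured_ai=0.4,          # share of compute left for LMF-configured AI
    max_simultaneous_cn_ai_models=2,
)
print(report.carried_by)  # -> LPP
```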
  • the first network element is the terminal, the second network element is an access network device, and the third network element is a core network device.
  • the terminal reports the AI function information configured by the core network device to the access network device.
  • the first information includes at least one of the following: an AI function for positioning configured by the third network element; a second AI function, wherein the second AI function is an AI function, among the AI functions configured by the third network element, other than the AI function for positioning; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for the AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • in the case where the first network element is the terminal, the second network element is the access network device, the third network element is the core network device, and the AI function information is configured by the third network element, the specific content of the first information is thus realized.
  • AI function reporting in this case is thus realized, and the access network device can coordinate the AI function or AI model based on the reported AI function to ensure the stability of communication.
  • the first information is carried in radio resource control (RRC) signaling.
  • the first network element is an access network device, and the second network element is a core network device.
  • the access network device reports to the core network device the information about the AI function configured by the access network device for the terminal.
  • the first information includes at least one of the following: an AI function for CSI prediction configured by the access network device for the terminal; an AI function for CSI compression configured by the access network device for the terminal; an AI function for beam management configured by the access network device for the terminal; a third AI function, wherein the third AI function is an AI function configured for the terminal, among the AI functions configured by the access network device, other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the computing capability of the terminal that can be used for the AI function for AI positioning configured by the core network device; the computing capability of the terminal that can be used for the AI function configured by the core network device; the computing capability of the terminal for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; the end time of the running of the AI function of the terminal.
  • when the access network device reports to the core network device the AI function information configured by the access network device for the terminal, the first information includes the specific content described above.
  • the AI function reporting in this case is realized, and then the core network device can coordinate the AI function or AI model based on the reported AI function to ensure the stability of communication.
  • the first network element is a core network device, and the second network element is an access network device.
  • the core network device reports to the access network device the information about the AI function configured by the core network device for the terminal.
  • the first information includes at least one of the following: an AI function for positioning configured by the core network device for the terminal; a fourth AI function, the fourth AI function being an AI function configured by the core network device for the terminal, except the AI function for positioning; the computing power of the terminal for positioning; the computing power of the terminal for the AI function configured by the core network device; the computing power that the terminal can use for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; and the end time of the running of the AI function of the terminal.
  • when the core network device reports to the access network device the AI function information configured by the core network device for the terminal, the first information includes the specific content described above.
  • the AI function reporting in this case is realized, and then the access network device can coordinate the AI function or AI model based on the reported AI function to ensure the stability of communication.
  • an embodiment of the present disclosure proposes a communication method, including: a second network element receives first information, the first information is sent by a first network element, and the first information includes artificial intelligence AI function information running on a terminal.
  • the AI function information is configured by a third network element.
  • the first network element is the terminal, the second network element is a core network device, and the third network element is an access network device.
  • the first information includes at least one of the following: an AI function for channel state information (CSI) prediction configured by the third network element; an AI function for CSI compression configured by the third network element; an AI function for beam management configured by the third network element; a first AI function configured by the third network element, the first AI function being an AI function configured by the third network element other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the first AI function configured by the third network element; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the first information is carried by at least one of the following: the LTE Positioning Protocol (LPP); the Sidelink Positioning Protocol (SLPP).
  • the first network element is the terminal, the second network element is an access network device, and the third network element is a core network device.
  • the first information includes at least one of the following: an AI function for positioning configured by the third network element; a second AI function, wherein the second AI function is an AI function, among the AI functions configured by the third network element, other than the AI function for positioning; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for the AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the first network element is an access network device, and the second network element is a core network device.
  • the first information includes at least one of the following: an AI function for CSI prediction configured by the access network device for the terminal; an AI function for CSI compression configured by the access network device for the terminal; an AI function for beam management configured by the access network device for the terminal; a third AI function, wherein the third AI function is an AI function configured for the terminal, among the AI functions configured by the access network device, other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the computing capability of the terminal that can be used for the AI function for AI positioning configured by the core network device; the computing capability of the terminal that can be used for the AI function configured by the core network device; the computing capability of the terminal for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; and the end time of the running of the AI function of the terminal.
  • the first network element is a core network device, and the second network element is an access network device.
  • an embodiment of the present disclosure proposes a first network element, comprising: a transceiver module, used to send first information to a second network element, wherein the first information includes artificial intelligence AI function information running on the terminal.
  • the AI function information is configured by a third network element.
  • the first network element is the terminal, the second network element is a core network device, and the third network element is an access network device.
  • the first information is carried by at least one of the following: the LTE Positioning Protocol (LPP); the Sidelink Positioning Protocol (SLPP).
  • the first network element is the terminal, the second network element is an access network device, and the third network element is a core network device.
  • the first information includes at least one of the following: an AI function for positioning configured by the third network element; a second AI function, wherein the second AI function is an AI function, among the AI functions configured by the third network element, other than the AI function for positioning; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for the AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the first information is carried in radio resource control (RRC) signaling.
  • the first network element is an access network device, and the second network element is a core network device.
  • the first information includes at least one of the following: an AI function for CSI prediction configured by the access network device for the terminal; an AI function for CSI compression configured by the access network device for the terminal; an AI function for beam management configured by the access network device for the terminal; a third AI function, wherein the third AI function is an AI function configured for the terminal, among the AI functions configured by the access network device, other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the computing capability of the terminal that can be used for the AI function for AI positioning configured by the core network device; the computing capability of the terminal that can be used for the AI function configured by the core network device; the computing capability of the terminal for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; the end time of the running of the AI function of the terminal.
  • the first network element is a core network device, and the second network element is an access network device.
  • the AI function information is configured by a third network element.
  • the first network element is the terminal, the second network element is a core network device, and the third network element is an access network device.
  • the first information includes at least one of the following: an AI function for channel state information (CSI) prediction configured by the third network element; an AI function for CSI compression configured by the third network element; an AI function for beam management configured by the third network element; a first AI function configured by the third network element, the first AI function being an AI function configured by the third network element other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the first AI function configured by the third network element; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the first information is carried by at least one of the following: the LTE Positioning Protocol (LPP); the Sidelink Positioning Protocol (SLPP).
  • the first network element is an access network device, and the second network element is a core network device.
  • the first information includes at least one of the following: the AI function for positioning configured by the third network element; a second AI function, the second AI function being an AI function configured by the third network element other than the AI function for positioning; the running time of the AI function; the start time at which the AI function runs; the end time at which the AI function runs; the computing power that can be used for the AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the first network element is a core network device, and the second network element is an access network device.
  • the first information includes at least one of the following: an AI function for positioning configured by the core network device for the terminal; a fourth AI function, the fourth AI function being the AI function configured by the core network device for the terminal, except the AI function for positioning; the computing power of the terminal for positioning; the computing power of the terminal for the AI function configured by the core network device; the computing power that the terminal can use for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; and the end time of the running of the AI function of the terminal.
  • an embodiment of the present disclosure proposes a first network element, comprising: one or more processors; wherein the one or more processors are configured to execute the communication method of the first aspect.
  • an embodiment of the present disclosure proposes a second network element, comprising: one or more processors; wherein the one or more processors are configured to execute the communication method of the second aspect.
  • the embodiments of the present disclosure provide a communication method, a first network element, a second network element, and a communication system.
  • the terms communication method, data processing method, information processing method, etc. can be replaced with each other
  • the terms data processing device, information processing device, communication device, etc. can be replaced with each other
  • the terms information processing system, communication system, etc. can be replaced with each other.
  • elements expressed in the singular form such as “a”, “an”, “the”, “above”, “said”, “aforementioned”, “this”, etc., may mean “one and only one”, or “one or more”, “at least one”, etc.
  • the noun after the article may be understood as a singular expression or a plural expression.
  • the notation “A or B” may include the following technical solutions according to the situation: in some embodiments, A (A is executed independently of B); in some embodiments, B (B is executed independently of A); in some embodiments, execution is selected from A and B (A and B are selectively executed).
  • when the description object is a “level”, the ordinal number before “level” in “first level” and “second level” does not limit the priority between the “levels”.
  • the number of description objects is not limited by ordinal numbers and can be one or more. Taking the “first device” as an example, the number of “devices” can be one.
  • the objects modified by different prefixes can be the same or different. For example, if the object described is “device”, then “the first device” and “the second device” can be the same device or different devices, and their types can be the same or different. For another example, if the object described is “information”, then “the first information” and “the second information” can be the same information or different information, and their contents can be the same or different.
  • “including A”, “comprising A”, “used to indicate A”, and “carrying A” can be interpreted as directly carrying A or indirectly indicating A.
  • time/frequency refers to the time domain and/or the frequency domain.
  • terms such as “greater than”, “greater than or equal to”, “not less than”, “more than”, “more than or equal to”, “not less than”, “higher than”, “higher than or equal to”, “not lower than”, and “above” can be replaced with each other, and terms such as “less than”, “less than or equal to”, “not greater than”, “less than”, “less than or equal to”, “no more than”, “lower than”, “lower than or equal to”, “not higher than”, and “below” can be replaced with each other.
  • devices, etc. can be interpreted as physical or virtual, and their names are not limited to the names recorded in the embodiments.
  • Terms such as “device”, “equipment”, “device”, “circuit”, “network element”, “node”, “function”, “unit”, “section”, “system”, “network”, “chip”, “chip system”, “entity”, and “subject” can be used interchangeably.
  • network may be interpreted as devices included in the network (e.g., access network equipment, core network equipment, etc.).
  • the terminal may be replaced by an access network device, a core network device, or a network device.
  • the access network device, the core network device, or the network device may also be configured to have a structure that has all or part of the functions of the terminal.
  • data, information, etc. may be obtained with the user's consent.
  • each element, each row, or each column in the table of the embodiments of the present disclosure may be implemented as an independent embodiment, and the combination of any elements, any rows, and any columns may also be implemented as an independent embodiment.
  • FIG. 1 is a schematic diagram of the architecture of a communication system according to an embodiment of the present disclosure.
  • a communication system 100 includes a first network element (node) 101 and a second network element (node) 102 .
  • the first network element 101 may be, for example, any one of a terminal, a core network device, or an access network device.
  • the second network element 102 may be, for example, any one of a terminal, a core network device, or an access network device.
  • the communication system 100 may further include a third network element 103, and the third network element 103 is used to configure AI function information.
  • the third network element 103 in the communication system 100 may be omitted.
  • the terminal includes, for example, at least one of the following: a mobile phone, a wearable device, an IoT device, a car with wireless communication function, a smart car, a tablet computer (Pad), a computer with wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical surgery, a wireless terminal device in a smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, and a wireless terminal device in a smart home, but is not limited thereto.
  • the access network device is, for example, a node or device that accesses a terminal to a wireless network.
  • the access network device may include at least one of the following: an evolved Node B (eNB), a next generation evolved Node B (ng-eNB), a next generation Node B (gNB), a node B (NB), a home node B (HNB), a home evolved node B (HeNB), a wireless backhaul device, a radio network controller (RNC), a base station controller (BSC), a base transceiver station (BTS), a base band unit (BBU), a mobile switching center, a base station in a 6G communication system, an open base station (Open RAN), a cloud base station (Cloud RAN), a base station in other communication systems, and an access node in a Wi-Fi system, but is not limited thereto.
  • the technical solution of the present disclosure may be applicable to the Open RAN architecture.
  • the interfaces between access network devices or within access network devices involved in the embodiments of the present disclosure may become internal interfaces of Open RAN, and the processes and information interactions between these internal interfaces may be implemented through software or programs.
  • the access network device may be composed of a centralized unit (central unit, CU) and a distributed unit (distributed unit, DU), wherein the CU may also be called a control unit.
  • the CU-DU structure may be used to split the protocol layer of the access network device, with some functions of the protocol layer being centrally controlled by the CU, and the remaining part or all of the functions of the protocol layer being distributed in the DU, and the DU being centrally controlled by the CU, but not limited to this.
  • the core network device may be a device including a first network element, a second network element, etc., or may be a plurality of devices or a group of devices including all or part of the first network element, the second network element, etc.
  • the network element may be virtual or physical.
  • the core network may include, for example, at least one of an Evolved Packet Core (EPC), a 5G Core Network (5GCN), and a Next Generation Core (NGC).
  • the communication system described in the embodiment of the present disclosure is for the purpose of more clearly illustrating the technical solution of the embodiment of the present disclosure, and does not constitute a limitation on the technical solution proposed in the embodiment of the present disclosure.
  • a person of ordinary skill in the art can know that with the evolution of the system architecture and the emergence of new business scenarios, the technical solution proposed in the embodiment of the present disclosure is also applicable to similar technical problems.
  • the following embodiments of the present disclosure may be applied to the communication system 100 shown in FIG1 , or part of the subject, but are not limited thereto.
  • the subjects shown in FIG1 are examples, and the communication system may include all or part of the subjects in FIG1 , or may include other subjects other than FIG1 , and the number and form of the subjects are arbitrary, and the subjects may be physical or virtual, and the connection relationship between the subjects is an example, and the subjects may be connected or disconnected, and the connection may be in any manner, and may be a direct connection or an indirect connection, and may be a wired connection or a wireless connection.
  • the technical solutions of the embodiments of the present disclosure may be applied to various communication systems, for example: Long Term Evolution (LTE), LTE-Advanced (LTE-A), LTE-Beyond (LTE-B), SUPER 3G, IMT-Advanced, the fourth generation mobile communication system (4G), the fifth generation mobile communication system (5G), 5G new radio (NR), Future Radio Access (FRA), new radio access technology (New RAT), new radio access (NX), Global System for Mobile communications (GSM (registered trademark)), CDMA2000, Ultra Mobile Broadband (UMB), IEEE 802.11 (Wi-Fi (registered trademark)), IEEE 802.16 (WiMAX (registered trademark)), IEEE 802.20, Ultra-WideBand (UWB), Bluetooth (registered trademark), a Public Land Mobile Network (PLMN), a Device to Device (D2D) system, a Machine to Machine (M2M) system, an Internet of Things (IoT) system, Vehicle to Everything (V2X), systems using other communication methods, next-generation systems expanded based on them, and combinations of multiple systems, but is not limited thereto.
  • artificial intelligence (AI) may be applied in 5G New Radio (NR) communication scenarios or even in 6G communication scenarios. In such scenarios, positioning based on AI or machine learning (ML) can be achieved; that is, the AI function is applied to the positioning of the terminal, and direct or indirect positioning of the terminal is achieved through AI technology.
  • AI models or AI functions are managed by access network equipment.
  • the access network equipment determines to use AI or ML-based beam management, and the access network equipment determines to estimate and predict the channel state information (CSI) based on AI or ML.
  • AI-based positioning functions are managed by core network equipment.
  • AI-based positioning functions can be managed by Location Management Function (LMF).
  • the terminal may use different AI models from different nodes to communicate in the same time period.
  • the terminal uses the AI-based positioning function configured by the core network equipment and the AI-based CSI prediction function of the access network equipment at the same time.
  • the AI model or AI function used by the terminal needs to be coordinated.
  • FIG. 2A is an interactive schematic diagram of a communication method according to an embodiment of the present disclosure. As shown in FIG. 2A, the embodiment of the present disclosure relates to a communication method, and the method includes:
  • a term such as “obtain” can be interpreted as receiving from other entities, obtaining from protocols, obtaining from higher layers, obtaining by self-processing, autonomous implementation, etc.
  • the names of information, etc. are not limited to the names recorded in the embodiments, and terms such as “information”, “message”, “signal”, “signaling”, “report”, “configuration”, “indication”, “instruction”, “command”, “channel”, “parameter”, “domain”, “field”, “symbol”, “symbol”, “code element”, “codebook”, “codeword”, “codepoint”, “bit”, “data”, “program”, and “chip” can be used interchangeably.
  • the first network element 101 may be any one of the following: a terminal, an access network device, or a core network device.
  • the second network element 102 may be any one of the following: a terminal, an access network device, or a core network device.
  • the terms AI model (artificial intelligence model), machine learning (Machine Learning, ML) model, AI function, and ML function can be replaced with each other.
  • the AI functionality includes an AI model.
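  • as a minimal, purely illustrative sketch of this relationship (the class and attribute names below are assumptions, not taken from the application), an AI function can be seen as a named capability that contains one or more AI models:
```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIModel:
    """One trained model, e.g. a CSI-prediction network (illustrative)."""
    name: str
    compute_cost: float  # hypothetical unit of computing power required to run it


@dataclass
class AIFunction:
    """An AI function (e.g. CSI prediction, beam management, positioning)
    that, as noted above, may include one or more AI models."""
    name: str
    models: List[AIModel] = field(default_factory=list)


csi_prediction = AIFunction(
    name="CSI prediction",
    models=[AIModel(name="csi-predictor-v1", compute_cost=0.3)],
)
```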
  • AI function information is configured by a third network element.
  • the third network element may be an access network device or a core network device.
  • the AI function information may not be configured by the third network element.
  • the third network element is omitted.
  • the access network equipment includes, for example, a new radio node B (NR Node B, gNB).
  • the core network device includes, for example, a module in the core network device for implementing a positioning management function, such as a Location Management Function (LMF) module, or an Access and Mobility Management Function (AMF) module.
  • in different cases, the content corresponding to the first information is not the same.
  • the first network element sends the first information to the second network element, wherein the first information includes the AI function information running on the terminal, generally including the following situations: -A) the first network element is the terminal and the second network element is a core network device; -B) the first network element is the terminal and the second network element is an access network device; -C) the first network element is an access network device and the second network element is a core network device; -D) the first network element is a core network device and the second network element is an access network device.
  • in the -C) and -D) cases, the AI function information is not configured by the third network element; therefore, for the -C) and -D) cases, the third network element is omitted.
  • step S2101: the first network element 101 sends the first information to the second network element 102.
  • step S2102: the second network element 102 communicates based on the first information.
  • steps S2101 and S2102 may be executed in an interchangeable order or simultaneously.
  • step S2102 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • step S2201 the terminal sends first information to the core network device.
  • the first information includes at least one of the following: AI function configured by the third network element, AI function processing time, AI function computing power, the capability of AI functions and/or AI models that can be run simultaneously, the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously.
  • the first AI function is an AI function configured by the third network element other than the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management.
  • the AI function configured by the third network element can also be understood to include at least one of the following: an AI function configured by the access network device for CSI prediction, an AI function configured by the access network device for CSI compression, an AI function configured by the access network device for beam management, and a first AI function configured by the access network device.
  • the AI function processing time includes at least one of the following: the running time of the AI function, the start time of the AI function running, or the end time of the AI function running.
  • the AI function computing capability includes at least one of the following: computing capability that can be used for CSI management, computing capability that can be used for beam management, computing capability for the first AI function configured by a third network element, or computing capability for the AI function configured by a second network element.
  • the computing power that can be used for CSI management includes: computing power that can be used for CSI prediction, and/or computing power that can be used for CSI compression.
  • the AI function configured by the second network element includes, for example: an AI positioning function.
  • the computing capability of the AI function that can be used for the second network element configuration includes, for example: computing capability based on AI positioning that can be used for the second network element configuration.
  • the AI function computing capability includes at least one of the following: computing capability that can be used for CSI prediction, computing capability that can be used for CSI compression, computing capability that can be used for beam management, computing capability that can be used for the first AI function configured in an access network device, or computing capability that can be used for AI positioning-based configuration in a core network device.
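  • a hedged sketch of how these computing-capability items could be grouped is shown below (the field names are assumptions introduced only for this example); it also reflects the note above that the computing power usable for CSI management covers CSI prediction and/or CSI compression:
```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIComputingCapability:
    """Illustrative grouping of the computing-capability items listed above."""
    csi_prediction: Optional[float] = None      # computing power usable for CSI prediction
    csi_compression: Optional[float] = None     # computing power usable for CSI compression
    beam_management: Optional[float] = None     # computing power usable for beam management
    ran_first_ai_function: Optional[float] = None  # first AI function configured by the access network device
    cn_ai_positioning: Optional[float] = None   # AI positioning configured by the core network device

    def csi_management(self) -> float:
        """Computing power usable for CSI management, i.e. CSI prediction and/or compression."""
        return (self.csi_prediction or 0.0) + (self.csi_compression or 0.0)


capability = AIComputingCapability(csi_prediction=0.3, csi_compression=0.2)
print(capability.csi_management())  # -> 0.5
```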
  • AI functions or AI models mentioned in the above content are all models that meet the applicable conditions of the terminal.
  • the capability to run simultaneously can be understood as which AI functions, which AI models, or which combinations of AI functions and AI models can be run simultaneously based on the terminal's own capabilities.
  • the above-mentioned AI functions and/or AI model capabilities that can be run simultaneously can be understood to include at least one of the following: AI functions configured by the core network that can be run simultaneously by the terminal, or AI models configured by the core network that can be run simultaneously by the terminal.
  • the core network device can determine the capability of the terminal to simultaneously run the AI functions and/or AI models configured by the access network device, and then the core network device can configure appropriate AI functions and/or AI models for the terminal according to this capability. For example, based on the capability of the terminal, the core network device can configure the terminal with an AI function or AI model that matches its ability to run simultaneously. For example, the terminal supports the simultaneous running of an AI model for CSI prediction and an AI model for CSI compression. Based on this capability of the terminal, the core network device can configure the terminal with an AI model for CSI prediction and/or an AI model for CSI compression.
  • the terminal supports the simultaneous operation of the AI function for beam management and the AI function for positioning. Based on the capability of the terminal, the core network device can configure the AI function for beam management and/or the AI function for positioning for the terminal. For another example, the terminal supports the simultaneous operation of the AI function for beam management and the AI function for positioning. When the core network device configures the AI function for beam management and/or the AI function for positioning for the terminal, the terminal can support the simultaneous operation of the above functions.
  • the terminal can simultaneously operate the AI function for beam management and the AI function for positioning, but cannot simultaneously operate the AI function for beam management, the AI function for positioning, and the AI function for CSI prediction.
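  • the coordination described above could, under the assumption that the capability is reported as explicit sets of AI functions that may run together, be sketched as follows (a hypothetical helper, not the application's algorithm):
```python
from typing import Iterable, List, Set


def select_configurable_functions(
    requested: Iterable[str],
    simultaneous_capability: List[Set[str]],
    already_running: Set[str],
) -> List[str]:
    """Illustrative sketch: keep only the requested AI functions whose addition
    to what the terminal already runs still fits one of the reported
    'can run simultaneously' sets."""
    selected: List[str] = []
    for func in requested:
        candidate = already_running | set(selected) | {func}
        if any(candidate <= allowed for allowed in simultaneous_capability):
            selected.append(func)
    return selected


# The terminal reports it can run beam management and positioning together, but not
# beam management + positioning + CSI prediction (as in the example above).
capability = [{"beam management", "positioning"}]
print(select_configurable_functions(
    requested=["positioning", "CSI prediction"],
    simultaneous_capability=capability,
    already_running={"beam management"},
))  # -> ['positioning'] (adding CSI prediction would exceed the reported capability)
```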
  • the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously include at least one of the following: the number of AI functions configured by the second network element that can be run simultaneously by the first network element, or the number of AI models configured by the second network element that can be run simultaneously by the first network element.
  • the above-mentioned AI functions and/or AI model capabilities that can be run simultaneously can be understood to include at least one of the following: the number of AI functions configured in the core network that the terminal can run simultaneously, or the number of AI models configured in the core network that the terminal can run simultaneously.
  • the terminal is capable of simultaneously running N AI models configured by LMF. Based on this capability of the terminal, LMF configures M AI models for the terminal.
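  • expressed as a trivial check (the function name is an assumption, and the constraints on N and M mirror those stated later for the access network case), the configuring network element would not exceed the reported count:
```python
def max_models_to_configure(reported_n: int, desired: int) -> int:
    """Return M, the number of AI models to configure, under the assumption that
    N >= 1 and 1 <= M <= N (N is the number of simultaneously runnable models
    reported by the terminal)."""
    if reported_n < 1:
        raise ValueError("N must be an integer greater than or equal to 1")
    return max(1, min(desired, reported_n))


assert max_models_to_configure(reported_n=3, desired=5) == 3  # M is capped at N
assert max_models_to_configure(reported_n=3, desired=2) == 2  # otherwise M follows the need
```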
  • the first information can be understood to include at least one of the following:
  • the first information is carried by at least one of the following: the first information is sent to the LMF based on the LTE Positioning Protocol (Long Term Evolution Positioning Protocol, LPP), and/or based on the Sidelink Positioning Protocol (SLPP).
  • terms such as “uplink”, “uplink”, “physical uplink” can be interchangeable, and terms such as “downlink”, “downlink”, “physical downlink” can be interchangeable, and terms such as “side”, “sidelink”, “side communication”, “sidelink communication”, “direct connection”, “direct link”, “direct communication”, “direct link communication” can be interchangeable.
  • the first information includes at least one of the following: a second AI function, AI function processing time, AI function computing power, the capabilities of AI functions and/or AI models that can be run simultaneously, or the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously.
  • the second AI function is an AI function configured in the third network element except for the AI function used for positioning.
  • step S2202 the core network device communicates based on the first information.
  • the core network device communicates based on the AI function configured by the access network device included in the first information.
  • the core network device configures a matching AI function for the terminal, thereby enabling communication of the terminal.
  • for relevant exemplary instructions, refer to step S2101 and the relevant description in step S2201, which will not be repeated here.
  • the core network device configures a matching AI function for the terminal according to the processing time of the AI function configured by the access network device for the terminal, thereby enabling communication of the terminal.
  • for the core network device to configure the terminal with a matching AI function based on the processing time of the AI function, relevant exemplary instructions can be found in step S2101 and the relevant description in step S2201, which will not be repeated here.
  • the core network device communicates based on the AI function computing capability included in the first information.
  • the core network device matches the corresponding AI function for the terminal according to the computing power of the AI function configured by the access network device running the terminal, thereby realizing communication of the terminal.
  • for the core network device to configure the terminal with matching AI functions based on the AI function computing capability, relevant exemplary instructions can be found in step S2101 and the relevant description in step S2201, which will not be repeated here.
  • the core network device communicates based on the capabilities of the AI functions and/or AI models running simultaneously included in the first information.
  • the core network device configures matching AI functions and/or AI models for the terminal based on the ability of the access network device running the terminal to simultaneously run AI functions and/or AI models, thereby achieving communication.
  • the core network device communicates based on the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously included in the first information.
  • the core network device configures matching AI functions and/or AI models for the terminal according to the number of AI functions that the terminal can run simultaneously and/or the number of AI models that can run simultaneously.
  • relevant exemplary instructions please refer to step S2101 and the relevant description in step S2201, which will not be repeated here.
  • step S2202 can refer to the optional implementation of step S2102 in FIG. 2A and other related parts in the embodiment involved in FIG. 2A , which will not be described in detail here.
  • the communication method involved in the embodiment of the present disclosure may include at least one of step S2201 to step S2202.
  • step S2201 may be implemented as an independent embodiment
  • step S2202 may be implemented as an independent embodiment
  • step S2201+step S2202 may be implemented as independent embodiments, but are not limited thereto.
  • steps S2201 and S2202 may be executed in an interchangeable order or simultaneously.
  • step S2202 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 2C is an interactive schematic diagram of a communication method corresponding to case -B) according to an embodiment of the present disclosure.
  • a communication method in case -B) is disclosed, wherein the first network element is a terminal, and the second network element is an access network device.
  • the method includes:
  • the AI function computing capability includes at least one of the following: computing capability for AI positioning configured by a third network element, computing capability for a second AI function configured by a third network element, computing capability that can be used for CSI management, computing capability that can be used for beam management, or computing capability for an AI function configured by a second network element.
  • the access network device configures a matching AI function for the terminal based on the AI function computing capability when it learns the AI function computing capability configured by the core network device for the terminal. For example, based on the computing capability of the terminal, the access network device can configure an AI function that matches its computing capability, or a combination of AI functions, for the terminal. For example, the computing capability of the terminal can only support the computing capability for CSI prediction, then the access network device matches the AI function for CSI prediction for it based on its computing capability.
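  • as a hedged sketch of the example above (a terminal whose reported computing capability only covers CSI prediction), the per-function costs and the selection helper below are purely hypothetical:
```python
from typing import Dict, List

# Hypothetical per-function compute costs assumed to be known to the access network device.
FUNCTION_COST: Dict[str, float] = {
    "CSI prediction": 0.3,
    "CSI compression": 0.5,
    "beam management": 0.6,
}


def match_functions_to_compute(available_compute: float) -> List[str]:
    """Illustrative only: pick the RAN-side AI functions whose individual cost
    fits within the computing capability the terminal reported."""
    return [name for name, cost in FUNCTION_COST.items() if cost <= available_compute]


# A terminal that can only afford CSI prediction is matched with exactly that function.
print(match_functions_to_compute(available_compute=0.3))  # -> ['CSI prediction']
```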
  • the above-mentioned AI functions and/or AI model capabilities that can be run simultaneously can be understood to include at least one of the following: AI functions configured by the access network that can be run simultaneously by the terminal, or AI models configured by the access network that can be run simultaneously by the terminal.
  • the AI function configured by the second network element includes, for example: an AI positioning function.
  • the access network device can determine the capability of the terminal to simultaneously run the AI functions and/or AI models configured by the core network device, and then the access network device can configure appropriate AI functions and/or AI models for the terminal according to this capability. For example, based on the capability of the terminal, the access network device can configure the terminal with an AI function or AI model that matches its ability to run simultaneously. For example, the terminal supports the simultaneous running of an AI model for CSI prediction and an AI model for CSI compression. Based on this capability of the terminal, the access network device can configure the terminal with an AI model for CSI prediction and/or an AI model for CSI compression.
  • the terminal supports the simultaneous operation of the AI function for beam management and the AI function for positioning. Based on the capability of the terminal, the access network device can configure the AI function for beam management and/or the AI function for positioning for the terminal. For another example, the terminal supports the simultaneous operation of the AI function for beam management and the AI function for positioning. When the access network device configures the AI function for beam management and/or the AI function for positioning for the terminal, the terminal can support the simultaneous operation of the above functions.
  • the terminal can simultaneously operate the AI function for beam management and the AI function for positioning, but cannot simultaneously operate the AI function for beam management, the AI function for positioning, and the AI function for CSI prediction.
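  • As a non-limiting illustration of the simultaneous-operation constraint in the example above, the following Python sketch checks whether a proposed set of AI functions is one the terminal reported as runnable at the same time. Representing the supported combinations as a set of frozensets is an assumption made for explanation only.

```python
# Illustrative sketch only: the representation of supported simultaneous
# combinations as frozensets is an assumption, not part of the disclosure.

SUPPORTED_COMBINATIONS = {
    frozenset({"beam_management", "positioning"}),
    frozenset({"beam_management"}),
    frozenset({"positioning"}),
    frozenset({"csi_prediction"}),
}

def can_run_simultaneously(functions: set[str]) -> bool:
    """True if the requested set of AI functions matches a combination
    the terminal reported it can run at the same time."""
    return frozenset(functions) in SUPPORTED_COMBINATIONS

print(can_run_simultaneously({"beam_management", "positioning"}))  # True
print(can_run_simultaneously(
    {"beam_management", "positioning", "csi_prediction"}))         # False
```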
  • the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously include at least one of the following: the number of AI functions configured by the second network element that can be run simultaneously by the first network element, or the number of AI models configured by the second network element that can be run simultaneously by the first network element.
  • the AI functions or AI models mentioned in the above content are all functions or models that meet the applicable conditions of the terminal.
  • the number that can run simultaneously can be understood as the maximum number of AI functions that can run simultaneously, the maximum number of AI models that can run simultaneously, and the maximum number of AI functions and AI models that can run simultaneously based on the capabilities of the terminal itself.
  • the access network device can determine the number of AI functions and/or AI models, configured by the core network device and run by the terminal, that can be run simultaneously. Then, the access network device can configure an appropriate number of AI functions and/or AI models for the terminal according to this capability. For example, based on the capability of the terminal, the access network device can configure the terminal with AI functions or AI models that match the number of simultaneous operations. For example, the terminal can simultaneously run N AI functions configured by the LMF (N is an integer greater than or equal to 1), and the access network device configures M AI functions for the terminal based on the capability of the terminal (M is an integer greater than or equal to 1 and less than or equal to N).
  • the terminal can simultaneously run N AI models configured by LMF, and the access network device configures M AI models for the terminal based on this capability of the terminal.
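  • As a non-limiting illustration of the M ≤ N relationship in the example above, the following Python sketch caps the number of AI functions an access network device configures by the number the terminal reported it can run simultaneously. The candidate function names are assumptions introduced only for explanation.

```python
# Illustrative sketch only: candidate names and the simple truncation rule
# are assumptions; the disclosure only states that M <= N.

def configure_function_count(candidates: list[str],
                             n_simultaneous: int) -> list[str]:
    """Pick at most N of the candidate AI functions, so that M <= N."""
    if n_simultaneous < 1:
        return []
    return candidates[:n_simultaneous]

# Terminal reports it can run N = 2 AI functions at once; the access
# network configures M = 2 of its three candidates.
print(configure_function_count(
    ["beam_management", "csi_prediction", "csi_compression"], 2))
```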
  • the access network device communicates based on the AI function configured by the core network device included in the first information.
  • the relevant exemplary instructions can be found in step S2101 and the relevant description in step S2301, which will not be repeated here.
  • the access network device communicates based on the AI function computing capability included in the first information.
  • the access network device matches a corresponding AI function for the terminal according to the computing power for the AI functions configured by the core network device and run by the terminal, thereby realizing communication of the terminal.
  • for related exemplary description, refer to step S2101 and the relevant description in step S2301, which will not be repeated here.
  • the access network device communicates based on the capabilities of the AI functions and/or AI models running simultaneously included in the first information.
  • the access network device configures matching AI functions and/or AI models for the terminal based on the terminal's capability to simultaneously run the AI functions and/or AI models configured by the core network device, thereby achieving communication.
  • the access network device configures matching AI functions for the terminal based on the capability of the AI functions and/or AI models that can be run simultaneously.
  • for related exemplary description, please refer to step S2101 and the relevant description in step S2301, which will not be repeated here.
  • the access network device communicates based on the number of AI functions that can be run simultaneously and/or the number of AI models that can be run simultaneously included in the first information.
  • the access network device configures matching AI functions and/or AI models for the terminal according to the number of AI functions and/or AI models, configured by the core network device and run by the terminal, that can be run simultaneously.
  • step S2302 can refer to the optional implementation of step S2102 in FIG. 2A and other related parts in the embodiment involved in FIG. 2A , which will not be described in detail here.
  • the communication method involved in the embodiment of the present disclosure may include at least one of steps S2301 to S2302.
  • step S2301 can be implemented as an independent embodiment
  • step S2302 can be implemented as an independent embodiment
  • step S2301 + step S2302 can be implemented as independent embodiments, but are not limited thereto.
  • steps S2301 and S2302 may be executed in an interchangeable order or simultaneously.
  • step S2302 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 2D is an interactive schematic diagram of a communication method corresponding to case -C) according to an embodiment of the present disclosure.
  • a communication method in case -C) is disclosed, the first network element is a terminal, the second network element is an access network device, and the method includes:
  • the AI function configured by the access network device for the terminal includes at least one of the following: an AI function configured by the access network device for CSI prediction for the terminal, an AI function configured by the access network device for CSI compression for the terminal, or an AI function configured by the access network device for beam management for the terminal.
  • the core network device can determine the processing time of the AI function, configured by the access network device, that the terminal is currently running, and then the core network device can configure an appropriate AI function for the terminal according to the processing time of that AI function. For example, considering the limited terminal capabilities, the core network device can configure an AI function for the terminal, such as AI-based positioning, after the terminal finishes running the AI function configured by the access network device.
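  • As a non-limiting illustration of the timing decision described above, the following Python sketch derives the earliest moment at which a new AI function could be started, given the reported end time of the AI function the terminal is already running. The field names and the optional guard interval are assumptions introduced only for explanation.

```python
# Illustrative sketch only: the guard interval and the datetime
# representation of the reported times are assumptions.

from datetime import datetime, timedelta

def earliest_start(reported_end_time: datetime,
                   guard: timedelta = timedelta()) -> datetime:
    """Start the new AI function no earlier than the reported end time of
    the AI function the terminal is already running."""
    return reported_end_time + guard

# The terminal reported that its currently running AI function ends here.
running_ends = datetime(2024, 1, 1, 12, 0, 5)
print(earliest_start(running_ends))  # 2024-01-01 12:00:05
```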
  • the core network device can adjust the AI function configured by the access network device for the terminal based on this capability.
  • the first information can be understood to include at least one of the following: the AI function configured by the access network device for the terminal, the AI function computing capability, or the AI function processing time.
  • step S2402 the core network device communicates based on the first information.
  • the core network device configures a matching AI function for the terminal, thereby enabling communication of the terminal.
  • for related exemplary description, refer to step S2101 and the relevant description in step S2401, which will not be repeated here.
  • the core network device communicates based on the AI function processing time included in the first information.
  • the core network device configures a matching AI function for the terminal according to the processing time of the AI function configured by the access network device for the terminal, thereby enabling communication of the terminal.
  • for related exemplary description, refer to step S2101 and step S2401, which will not be repeated here.
  • the core network device communicates based on the AI function computing capability included in the first information.
  • the core network device matches a corresponding AI function for the terminal according to the computing power for the AI functions configured by the access network device and run by the terminal, thereby realizing communication of the terminal.
  • for related exemplary description, refer to step S2101 and the relevant description in step S2401, which will not be repeated here.
  • the optional implementation of step S2402 can refer to the optional implementation of step S2102 in Figure 2A and other related parts of the embodiment involved in Figure 2A, which will not be repeated here.
  • steps S2401 and S2402 may be executed in an interchangeable order or simultaneously.
  • step S2402 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 2E is an interactive schematic diagram of a communication method corresponding to case -D) according to an embodiment of the present disclosure.
  • a communication method in case -D) is disclosed, the first network element is a core network device, the second network element is an access network device, and the method includes:
  • step S2501 the core network device sends first information to the access network device.
  • step S2501 can refer to the optional implementation of step S2101 in FIG. 2A and other related parts in the embodiment involved in FIG. 2A , which will not be described in detail here.
  • the first information includes at least one of the following:
  • the AI function for positioning configured by the core network device for the terminal, the fourth AI function, the computing power of the terminal for positioning, the computing power of the terminal for the AI function configured by the core network device, the computing power of the terminal that can be used for the AI function configured by the access network device, and the AI function processing time.
  • the fourth AI function is an AI function configured by the core network device for the terminal, except for the AI function used for positioning.
  • the access network device can configure a matching AI function for the terminal after learning the AI function configured for the terminal by the core network device.
  • the AI function processing time includes at least one of the following: the running time of the AI function, the start time of the AI function running, or the end time of the AI function running.
  • the access network device can determine the processing time of the AI function, configured by the core network device, that the terminal is currently running, and then the access network device can configure an appropriate AI function for the terminal according to the processing time of that AI function. For example, considering the limited terminal capabilities, the access network device can configure an AI function for the terminal after the terminal finishes running the AI function configured by the core network device, such as AI-based positioning.
  • terms such as "certain", "preset", "set", "indicated", "some", "any", and "first" can be interchangeable, and "specific A", "preset A", "set A", "indicated A", "some A", "any A", and "first A" can be interpreted as A pre-defined in a protocol, or as A obtained through setting, configuration, or indication, and can also be interpreted as specific A, some A, any A, or first A, but are not limited thereto.
  • terms such as "radio" and "wireless" can be interchangeable, and terms such as "RAN (radio access network)", "AN (access network)", "RAN-based" and the like can be interchangeable; abbreviations such as SS (synchronization signal), SSB (synchronization signal block), RS (reference signal), and pilot (pilot signal) are used interchangeably with their full names.
  • terms such as “moment”, “time point”, “time”, and “time position” can be interchangeable, and terms such as “duration”, “period”, “time window”, “window”, and “time” can be interchangeable.
  • step S2502 the access network device communicates based on the first information.
  • the access network device communicates based on the AI function configured by the core network device included in the first information.
  • the access network device configures a matching AI function for the terminal, thereby enabling communication of the terminal.
  • for related exemplary description, refer to step S2101 and the relevant description in step S2501, which will not be repeated here.
  • the access network device communicates based on the AI function computing capability included in the first information.
  • the access network device matches a corresponding AI function for the terminal according to the computing power for the AI functions configured by the core network device and run by the terminal, thereby realizing communication of the terminal.
  • for related exemplary description, refer to step S2101 and the relevant description in step S2501, which will not be repeated here.
  • the access network device communicates based on the AI function processing time included in the first information.
  • the access network device configures a matching AI function for the terminal according to the processing time of the AI function configured by the core network device for the terminal, thereby enabling communication of the terminal.
  • for the access network device to configure the terminal with a matching AI function based on the processing time of the AI function, related exemplary description can be found in step S2101 and the relevant description in step S2501, which will not be repeated here.
  • step S2502 can refer to the optional implementation of step S2102 in FIG. 2A and other related parts in the embodiment involved in FIG. 2A , which will not be described in detail here.
  • the communication method involved in the embodiment of the present disclosure may include at least one of step S2501 to step S2502.
  • step S2501 may be implemented as an independent embodiment
  • step S2502 may be implemented as an independent embodiment
  • step S2501+step S2502 may be implemented as independent embodiments, but are not limited thereto.
  • steps S2501 and S2502 may be executed in an interchangeable order or simultaneously.
  • step S2502 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 3 is a flow chart of a communication method according to an embodiment of the present disclosure. As shown in FIG. 3, this embodiment of the present disclosure relates to a communication method, and the method includes:
  • Step S3101 sending the first information.
  • the optional implementation method of step S3101 can refer to the optional implementation method of step S2101 in Figure 2A, the optional implementation method of step S2201 in Figure 2B, the optional implementation method of step S2301 in Figure 2C, the optional implementation method of step S2401 in Figure 2D, the optional implementation method of step S2501 in Figure 2E and other related parts in the embodiments involved in Figures 2A, 2B, 2C, 2D and 2E, which will not be repeated here.
  • the first information includes artificial intelligence (AI) function information running on the terminal.
  • the AI function information is configured by a third network element.
  • the AI function information is configured by a third network element, the first network element is a terminal, the second network element is a core network device, and the third network element is an access network device.
  • the first information includes at least one of the following: an AI function for CSI prediction configured by the third network element; an AI function for CSI compression configured by the third network element; an AI function for beam management configured by the third network element; a first AI function configured by the third network element, the first AI function being an AI function configured by the third network element, except for the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the running time of the AI function; the start time of the AI function running; the end time of the AI function running; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the first AI function configured by the third network element; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that the first network element can run simultaneously; the number of AI models configured by the second network element that the first network element can run simultaneously; the AI functions configured by the second network element that the first network element can run simultaneously; or the AI models configured by the second network element that the first network element can run simultaneously.
  • the AI function information is configured by a third network element, the first network element is a terminal, the second network element is a core network device, and the third network element is an access network device, and the first information is carried on at least one of the following: LPP, or SLPP.
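  • As a non-limiting illustration of how the first information listed above could be represented before being carried on LPP or SLPP, the following Python sketch defines a possible in-memory structure. The field names and the shape of the structure are assumptions introduced for explanation only; the disclosure does not define an encoding.

```python
# Illustrative sketch only: field names and types are assumptions; the
# disclosure only states which items the first information may include.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FirstInformation:
    functions_configured_by_ran: list[str] = field(default_factory=list)
    run_start_time: Optional[float] = None   # e.g., seconds since epoch
    run_end_time: Optional[float] = None
    compute_for_csi_prediction: Optional[int] = None
    compute_for_csi_compression: Optional[int] = None
    compute_for_beam_management: Optional[int] = None
    max_simultaneous_functions: Optional[int] = None
    max_simultaneous_models: Optional[int] = None

# Example report from a terminal running two RAN-configured AI functions.
report = FirstInformation(
    functions_configured_by_ran=["csi_prediction", "beam_management"],
    max_simultaneous_functions=2,
)
print(report)
```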
  • AI function information is configured by a third network element, and when the first network element is a terminal, the second network element is an access network device, and the third network element is a core network device, the first information includes at least one of the following: an AI function for positioning configured by the third network element; a second AI function, where the second AI function is an AI function other than the AI function for positioning among the AI functions configured by the third network element; the running time of the AI function; the start time of the AI function running; the end time of the AI function running; the computing power that can be used for AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that the first network element can run simultaneously; the number of AI models configured by the second network element that the first network element can run simultaneously; the AI functions configured by the second network element that the first network element can run simultaneously; or the AI models configured by the second network element that the first network element can run simultaneously.
  • when the first network element is an access network device and the second network element is a core network device, the first information includes at least one of the following: an AI function for CSI prediction configured by the access network device for the terminal; an AI function for CSI compression configured by the access network device for the terminal; an AI function for beam management configured by the access network device for the terminal; a third AI function, the third AI function being an AI function configured by the access network device for the terminal, except the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the computing power of the terminal that can be used for AI-based positioning configured by the core network device; the computing power of the terminal that can be used for the AI function configured by the core network device; the computing power of the terminal for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; or the end time of the running of the AI function of the terminal.
  • when the first network element is a core network device and the second network element is an access network device, the first information includes at least one of the following: an AI function for positioning configured by the core network device for the terminal; a fourth AI function, where the fourth AI function is an AI function configured by the core network device for the terminal, except the AI function for positioning; the computing power of the terminal for positioning; the computing power of the terminal for the AI function configured by the core network device; the computing power of the terminal that can be used for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; and the end time of the running of the AI function of the terminal.
  • Step S3102 communicate based on the first information.
  • step S3101 may be implemented as an independent embodiment
  • step S3102 may be implemented as an independent embodiment
  • step S3101+step S3102 may be implemented as independent embodiments, but are not limited thereto.
  • steps S3101 and S3102 may be executed in an interchangeable order or simultaneously.
  • step S3102 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 4A is a flow chart of a communication method according to an embodiment of the present disclosure. As shown in FIG. 4A, this embodiment of the present disclosure relates to a communication method, and the method includes:
  • Step S4101 obtaining first information.
  • the second network element 102 receives the first information sent by the first network element 101, but is not limited thereto, and may also receive the first information sent by other entities.
  • the second network element 102 obtains first information specified by the protocol.
  • the second network element 102 obtains the first information from an upper layer(s).
  • the second network element 102 performs processing to obtain the first information.
  • step S3101 is omitted, and the second network element 102 autonomously implements the function indicated by the first information, or the above function is enabled by default.
  • the optional implementation method of step S4101 can refer to the optional implementation method of step S2101 in Figure 2A, the optional implementation method of step S2201 in Figure 2B, the optional implementation method of step S2301 in Figure 2C, the optional implementation method of step S2401 in Figure 2D, the optional implementation method of step S2501 in Figure 2E, the optional implementation method of step S3101 in Figure 3 and other related parts in the embodiments involved in Figures 2A, 2B, 2C, 2D, 2E and 3, which will not be repeated here.
  • Step S4102 communicate based on the first information.
  • the optional implementation method of step S4102 can refer to the optional implementation method of step S2102 in Figure 2A, the optional implementation method of step S2201 in Figure 2B, the optional implementation method of step S2301 in Figure 2C, the optional implementation method of step S2401 in Figure 2D, the optional implementation method of step S2501 in Figure 2E and other related parts in the embodiments involved in Figures 2A, 2B, 2C, 2D and 2E, which will not be repeated here.
  • step S4101 may be implemented as an independent embodiment
  • step S4102 may be implemented as an independent embodiment
  • step S4101+step S4102 may be implemented as independent embodiments, but are not limited thereto.
  • steps S4101 and S4102 may be executed in an interchangeable order or simultaneously.
  • step S4102 is optional, and one or more of these steps may be omitted or replaced in different embodiments.
  • FIG. 4B is a flow chart of a communication method according to an embodiment of the present disclosure. As shown in FIG. 4B, this embodiment of the present disclosure relates to a communication method, and the method includes:
  • Step S4201 receiving first information.
  • the optional implementation method of step S4201 can refer to the optional implementation method of step S2101 in Figure 2A, the optional implementation method of step S2201 in Figure 2B, the optional implementation method of step S2301 in Figure 2C, the optional implementation method of step S2401 in Figure 2D, the optional implementation method of step S2501 in Figure 2E, the optional implementation method of step S3101 in Figure 3 and other related parts in the embodiments involved in Figures 2A, 2B, 2C, 2D, 2E and 3, which will not be repeated here.
  • the first information is sent by the first network element, and the first information includes AI function information running on the terminal.
  • the AI function information is configured by a third network element.
  • the AI function information is configured by a third network element, and when the first network element is a terminal, the second network element is a core network device, and the third network element is an access network device, the first information includes at least one of the following: an AI function for channel state information (CSI) prediction configured by the third network element; an AI function for CSI compression configured by the third network element; an AI function for beam management configured by the third network element; a first AI function configured by the third network element, the first AI function being an AI function configured by the third network element, except for the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management.
  • the running time of the AI function; the start time of the AI function running; the end time of the AI function running; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the first AI function configured by the third network element; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that can be run simultaneously by the first network element; the number of AI models configured by the second network element that can be run simultaneously by the first network element; the AI functions configured by the second network element that can be run simultaneously by the first network element; or the AI models configured by the second network element that can be run simultaneously by the first network element.
  • the AI function information is configured by a third network element, the first network element is a terminal, the second network element is a core network device, and the third network element is an access network device, and the first information is carried on at least one of the following: LPP, or SLPP.
  • AI function information is configured by a third network element, and when the first network element is a terminal, the second network element is an access network device, and the third network element is a core network device, the first information includes at least one of the following: an AI function for positioning configured by the third network element; a second AI function, where the second AI function is an AI function other than the AI function for positioning among the AI functions configured by the third network element; the running time of the AI function; the start time of the AI function running; the end time of the AI function running; the computing power that can be used for AI positioning configured by the third network element; the computing power that can be used for the second AI function configured by the third network element; the computing power that can be used for CSI prediction; the computing power that can be used for CSI compression; the computing power that can be used for beam management; the computing power that can be used for the AI function configured by the second network element; the number of AI functions configured by the second network element that the first network element can run simultaneously; the number of AI models configured by the second network element that the first network element can run simultaneously; the AI functions configured by the second network element that the first network element can run simultaneously; or the AI models configured by the second network element that the first network element can run simultaneously.
  • the AI function information is configured by a third network element, the first network element is a terminal, the second network element is an access network device, and the third network element is a core network device, and the first information is carried in radio resource control (RRC) signaling.
  • the AI function information is configured by a third network element, and when the first network element is an access network device and the second network element is a core network device, the first information includes at least one of the following: an AI function for CSI prediction configured by the access network device for the terminal; an AI function for CSI compression configured by the access network device for the terminal; an AI function for beam management configured by the access network device for the terminal; a third AI function, wherein the third AI function is an AI function configured by the access network device for the terminal, except for the AI function for CSI prediction, the AI function for CSI compression, and the AI function for beam management; the computing power of the terminal that can be used for AI-based positioning configured by the core network device; the computing power of the terminal that can be used for the AI function configured by the core network device; the computing power of the terminal for the AI function configured by the access network device; the running time of the AI function of the terminal; the start time of the running of the AI function of the terminal; the end time of the running of the AI function of the terminal.
  • AI function information is configured by a third network element, and when the first network element is a core network device and the second network element is an access network device, the first information includes at least one of the following: an AI function for positioning configured by the core network device for the terminal; a fourth AI function, where the fourth AI function is an AI function configured by the core network device for the terminal, except the AI function for positioning; the computing power of the terminal for positioning; the computing power of the terminal for the AI function configured by the core network device; the computing power of the terminal that can be used for the AI function configured by the access network device; the running time of the terminal's AI function; the start time of the terminal's AI function running; or the end time of the terminal's AI function running.
  • FIG. 5 is an interactive schematic diagram of a communication method according to an embodiment of the present disclosure. As shown in FIG. 5, this embodiment of the present disclosure relates to a communication method, and the method includes:
  • Step S5101 The first network element 101 sends first information to the second network element 102.
  • step S5101 can refer to step S2101 in Figure 2A, step S3101 in Figure 3, step S4101 in Figure 4A, step S4201 in Figure 4B, and other related parts in the embodiments involved in Figures 2A, 3, 4A and 4B, which will not be repeated here.
  • the first information includes artificial intelligence (AI) function information running on the terminal.
  • the above method may include the method described in the above embodiments of the communication system side, terminal side, network device side, etc., which will not be repeated here.
  • a communication method is proposed to address the problem that a terminal faces in coordinating different AI functions and/or AI models from access network devices and core network devices.
  • the AI model or AI function used by the terminal is coordinated, including at least one of the following situations:
  • the terminal provides, to the access network device, information indicating the AI functions running on the terminal that are managed by the core network device.
  • the terminal mentioned in the embodiments of the present disclosure can be written as "UE", the access network device mentioned can be written as “gNB”, and the core network device mentioned can be "LMF" or "CN".
  • the information includes at least one of the following: information on the running of the AI function, time information on the running of the AI function, floating point number information, the number of AI functions for positioning that can be run simultaneously, the number of AI models for positioning that can be run simultaneously, applicable AI models, or applicable AI functions.
  • AI functions managed by the access network device can be understood as AI functions managed by the access network device other than the above-listed functions.
  • the time information of the running of the AI function includes at least one of the following: the running time of the AI function, the start time of the AI function or the stop time of the AI function.
  • the floating point information includes at least one of the following: floating point information for CSI prediction, floating point information for beam management, floating point information for positioning, or floating point information of other AI functions managed by access network equipment.
  • the number of AI functions for positioning that can be run simultaneously can be understood as the maximum number of AI functions that can be applied by the terminal based on the capabilities of the terminal and managed via the core network device.
  • the information on the operation of the AI function includes at least one of the following: a function for CSI prediction, a function for beam management, or other AI functions managed by the core network device.
  • AI functions managed by the core network device can be understood as AI functions managed by the core network device other than the above-listed functions.
  • the time information of the running of the AI function includes at least one of the following: the running time of the AI function, the start time of the AI function or the stop time of the AI function.
  • the floating point information includes at least one of the following: floating point information for CSI prediction, floating point information for beam management, floating point information for positioning, or floating point information of other AI functions managed by access network equipment.
  • the number of AI functions for positioning that can be run simultaneously can be understood as the maximum number of AI functions that can be applied by the terminal based on the capabilities of the terminal, via management of the access network device.
  • the number of AI models for positioning that can be run simultaneously can be understood as the maximum number of AI models that can be applied by the terminal based on the capabilities of the terminal and managed via the access network device.
  • the information may be sent via RRC signaling, such as via UAI.
  • the information includes at least one of the following:
  • a function used for CSI prediction, a function used for beam management, other AI functions managed by access network equipment, floating point numbers used for positioning, floating point numbers used for AI functions managed by access network equipment, the running time of AI functions, the start time of AI functions, or the stop time of AI functions.
  • the information includes at least one of the following:
  • a function used for CSI prediction, a function used for beam management, other AI functions managed by core network equipment, floating point numbers used for positioning, floating point numbers used for AI functions managed by access network equipment, the running time of AI functions, the start time of AI functions, or the stop time of AI functions.
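  • As a non-limiting illustration of assembling the items listed above into a UE-assistance-style report before handing it to RRC signaling, the following Python sketch collects the running functions, floating point capability figures, and run times into one structure. The field names are assumptions introduced for explanation only; the disclosure only states that such information may be sent via RRC signaling, for example via UAI.

```python
# Illustrative sketch only: field names and the dict layout are
# assumptions; no RRC/UAI encoding is defined by the disclosure.

def build_assistance_report(functions: list[str],
                            flops: dict[str, float],
                            run_times: dict[str, tuple[float, float]]) -> dict:
    """Collect the reported items into one structure.

    run_times maps a function name to (start_time, stop_time)."""
    return {
        "running_ai_functions": functions,
        "floating_point_capability": flops,
        "run_times": run_times,
    }

report = build_assistance_report(
    functions=["csi_prediction"],
    flops={"positioning": 2.0e9},
    run_times={"csi_prediction": (100.0, 160.0)},
)
print(report)
```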
  • part or all of the steps and their optional implementations may be arbitrarily combined with part or all of the steps in other embodiments, or may be arbitrarily combined with optional implementations of other embodiments.
  • the embodiments of the present disclosure also propose a device for implementing any of the above methods, for example, a device is proposed, the above device includes a unit or module for implementing each step performed by the terminal in any of the above methods.
  • a device is also proposed, including a unit or module for implementing each step performed by a network device (such as an access network device, a core network function node, a core network device, etc.) in any of the above methods.
  • the division of the units or modules in the above device is only a division of logical functions, which can be fully or partially integrated into one physical entity or physically separated in actual implementation.
  • the units or modules in the device can be implemented in the form of a processor calling software: for example, the device includes a processor, the processor is connected to a memory, and instructions are stored in the memory.
  • the processor calls the instructions stored in the memory to implement any of the above methods or implement the functions of the units or modules of the above device, wherein the processor is, for example, a general-purpose processor, such as a central processing unit (CPU) or a microprocessor, and the memory is a memory inside the device or a memory outside the device.
  • the units or modules in the device may be implemented in the form of hardware circuits, and the functions of some or all of the units or modules may be implemented by designing the hardware circuits.
  • the hardware circuits may be understood as one or more processors; for example, in one implementation, the hardware circuits are application-specific integrated circuits (ASICs), and the functions of some or all of the above units or modules may be implemented by designing the logical relationship of the components in the circuits; for another example, in another implementation, the hardware circuits may be implemented by programmable logic devices (PLDs), and Field Programmable Gate Arrays (FPGAs) may be used as an example, which may include a large number of logic gate circuits, and the connection relationship between the logic gate circuits may be configured by configuring the configuration files, thereby implementing the functions of some or all of the above units or modules. All units or modules of the above devices may be implemented in the form of software called by the processor, or in the form of hardware circuits, or in the form of software called by the processor, and the remaining part may be implemented in
  • the processor is a circuit with signal processing capability.
  • the processor may be a circuit with instruction reading and running capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU) (which may be understood as a microprocessor), or a digital signal processor (DSP); in another implementation, the processor may implement certain functions through the logical relationship of a hardware circuit, and the logical relationship of the above hardware circuit may be fixed or reconfigurable, such as a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as an FPGA.
  • the process of the processor loading a configuration document to implement the hardware circuit configuration may be understood as the process of the processor loading instructions to implement the functions of some or all of the above units or modules.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as ASIC, such as Neural Network Processing Unit (NPU), Tensor Processing Unit (TPU), Deep Learning Processing Unit (DPU), etc.
  • Figure 6A is a schematic diagram of the structure of the first network element proposed in an embodiment of the present disclosure.
  • the first network element 6100 may include: a transceiver module 6101 and a processing module 6102.
  • the above-mentioned transceiver module 6101 is used to send first information to the second network element, wherein the first information includes artificial intelligence AI function information running on the terminal.
  • the above-mentioned transceiver module 6101 is used to execute at least one of the communication steps such as sending and/or receiving (for example, step S2101, but not limited to this) performed by the first network element 101 in any of the above methods, which will not be repeated here.
  • the processing module 6102 is used to communicate based on the first information.
  • the processing module 6102 is used to execute at least one of the processing steps (for example, communicating based on the first information, but not limited thereto) performed by the first network element 101 in any of the above methods, which will not be described in detail here.
  • FIG. 6B is a schematic diagram of the structure of the second network element proposed in an embodiment of the present disclosure.
  • the second network element 6200 may include: a transceiver module 6201 and a processing module 6202.
  • the transceiver module 6201 is used to receive first information from the first network element, wherein the first information is sent by the first network element, and the first information includes artificial intelligence AI function information running on the terminal.
  • the transceiver module 6201 is used to execute at least one of the communication steps such as sending and/or receiving (for example, step S2101, but not limited to this) performed by the second network element 102 in any of the above methods, which will not be repeated here.
  • the processing module 6202 is used to communicate based on the first information, wherein the first information is sent by the first network element, and the first information includes artificial intelligence AI function information running on the terminal.
  • the processing module 6202 is used to execute at least one of the processing steps (such as step S2102, but not limited to this) executed by the second network element 102 in any of the above methods, which will not be repeated here.
  • the transceiver module may include a sending module and/or a receiving module, and the sending module and the receiving module may be separate or integrated.
  • the transceiver module may be interchangeable with the transceiver.
  • the processing module can be a module or include multiple submodules.
  • the multiple submodules respectively execute all or part of the steps required to be executed by the processing module.
  • the processing module can be replaced with the processor.
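  • As a non-limiting illustration of the split between a transceiver module (which sends or receives the first information) and a processing module (which communicates based on it), the following Python sketch models the second network element described above. The class names, method names, and the placeholder payload are assumptions introduced for explanation only.

```python
# Illustrative sketch only: class and method names are assumptions; the
# modules in the disclosure are logical units, not concrete classes.

class TransceiverModule:
    def send(self, peer: str, first_information: dict) -> None:
        print(f"sending first information to {peer}: {first_information}")

    def receive(self) -> dict:
        # Placeholder payload standing in for a received first information.
        return {"running_ai_functions": ["csi_prediction"]}

class ProcessingModule:
    def communicate(self, first_information: dict) -> None:
        # E.g., configure a matching AI function for the terminal.
        print(f"communicating based on: {first_information}")

class SecondNetworkElement:
    def __init__(self) -> None:
        self.transceiver = TransceiverModule()
        self.processing = ProcessingModule()

    def handle(self) -> None:
        info = self.transceiver.receive()
        self.processing.communicate(info)

SecondNetworkElement().handle()
```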
  • FIG7A is a schematic diagram of the structure of a communication device 7100 proposed in an embodiment of the present disclosure.
  • the communication device 7100 may be a network device (e.g., an access network device, a core network device, etc.), or a terminal (e.g., a user device, etc.), or a chip, a chip system, or a processor that supports a network device to implement any of the above methods, or a chip, a chip system, or a processor that supports a terminal to implement any of the above methods.
  • the communication device 7100 may be used to implement the method described in the above method embodiment, and the details may refer to the description in the above method embodiment.
  • the communication device 7100 includes one or more processors 7101.
  • the processor 7101 may be a general-purpose processor or a dedicated processor, for example, a baseband processor or a central processing unit.
  • the baseband processor may be used to process the communication protocol and the communication data
  • the central processing unit may be used to control the communication device (such as a base station, a baseband chip, a terminal device, a terminal device chip, a DU or a CU, etc.), execute the program, and process the data of the program.
  • the communication device 7100 is used to execute any of the above methods.
  • one or more processors 7101 are used to call instructions so that the communication device 7100 executes any of the above methods.
  • the communication device 7100 further includes one or more transceivers 7102.
  • the transceiver 7102 performs at least one of the communication steps such as sending and/or receiving in the above method (for example, step S2101, but not limited thereto), and the processor 7101 performs at least one of the other steps (for example, step S2102, but not limited thereto).
  • the transceiver may include a receiver and/or a transmitter, and the receiver and the transmitter may be separated or integrated together.
  • the terms such as transceiver, transceiver unit, transceiver, transceiver circuit, interface circuit, interface, etc. may be replaced with each other, the terms such as transmitter, transmitting unit, transmitter, transmitting circuit, etc. may be replaced with each other, and the terms such as receiver, receiving unit, receiver, receiving circuit, etc. may be replaced with each other.
  • the communication device 7100 further includes one or more memories 7103 for storing data.
  • the memories 7103 may also be outside the communication device 7100.
  • the communication device 7100 may include one or more interface circuits 7104.
  • the interface circuit 7104 is connected to the memory 7103, and the interface circuit 7104 may be used to receive data from the memory 7103 or other devices, and may be used to send data to the memory 7103 or other devices.
  • the interface circuit 7104 may read the data stored in the memory 7103 and send the data to the processor 7101.
  • the communication device 7100 described in the above embodiments may be a network device or a terminal, but the scope of the communication device 7100 described in the present disclosure is not limited thereto, and the structure of the communication device 7100 may not be limited by FIG. 7A.
  • the communication device may be an independent device or may be part of a larger device.
  • the communication device may be: (1) an independent integrated circuit (IC), a chip, or a chip system or subsystem; (2) a collection of one or more ICs, where, optionally, the IC collection may also include a storage component for storing data and programs; (3) an ASIC, such as a modem; (4) a module that can be embedded in other devices; (5) a receiver, a terminal device, an intelligent terminal device, a cellular phone, a wireless device, a handheld device, a mobile unit, a vehicle-mounted device, a network device, a cloud device, an artificial intelligence device, etc.; (6) others.
  • FIG. 7B is a schematic diagram of the structure of a chip 7200 provided in an embodiment of the present disclosure.
  • the communication device 7100 may be a chip or a chip system
  • the chip 7200 includes one or more processors 7201.
  • the chip 7200 is configured to execute any of the above methods.
  • the chip 7200 further includes one or more interface circuits 7202.
  • the terms interface circuit, interface, transceiver pin, etc. can be interchangeable.
  • the chip 7200 further includes one or more memories 7203 for storing data.
  • all or part of the memory 7203 can be outside the chip 7200.
  • the interface circuit 7202 is connected to the memory 7203, and the interface circuit 7202 can be used to receive data from the memory 7203 or other devices, and the interface circuit 7202 can be used to send data to the memory 7203 or other devices.
  • the interface circuit 7202 can read the data stored in the memory 7203 and send the data to the processor 7201.
  • the interface circuit 7202 performs at least one of the communication steps such as sending and/or receiving in the above method (for example, step S2101 but not limited thereto).
  • the interface circuit 7202 performs the communication steps such as sending and/or receiving in the above method, for example, means that the interface circuit 7202 performs data interaction between the processor 7201, the chip 7200, the memory 7203 or the transceiver device.
  • the processor 7201 performs at least one of the other steps (for example, step S2102 but not limited thereto).
  • modules and/or devices described in the embodiments such as virtual devices, physical devices, chips, etc. can be combined or separated as needed.
  • some or all steps can also be performed by multiple modules and/or devices in collaboration, which is not limited here.
  • the present disclosure also proposes a storage medium, on which instructions are stored.
  • the storage medium is an electronic storage medium.
  • the medium is a computer-readable storage medium, but is not limited thereto, and may also be a storage medium readable by other devices.
  • the storage medium may be a non-transitory storage medium, but is not limited thereto, and may also be a transient storage medium.
  • the present disclosure also proposes a program product, which, when executed by the communication device 7100, enables the communication device 7100 to execute any of the above methods.
  • the program product is a computer program product.
  • the present disclosure also proposes a computer program, which, when executed on a computer, causes the computer to execute any one of the above methods.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present disclosure relates to a communication method, a first network element, a second network element, and a communication system. The communication method comprises the following steps: a first network element sends first information to a second network element, the first information comprising artificial intelligence (AI) function information running on a terminal. The embodiments of the present disclosure can implement coordination between a terminal, a core network device, and a base station device for an AI model deployed at the terminal or for AI functions.
PCT/CN2023/140443 2023-12-20 2023-12-20 Procédé de communication, premier élément de réseau, second élément de réseau et système de communication Pending WO2025129531A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2023/140443 WO2025129531A1 (fr) 2023-12-20 2023-12-20 Procédé de communication, premier élément de réseau, second élément de réseau et système de communication
CN202380097985.1A CN121080003A (zh) 2023-12-20 2023-12-20 通信方法、第一网元、第二网元及通信系统

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/140443 WO2025129531A1 (fr) 2023-12-20 2023-12-20 Procédé de communication, premier élément de réseau, second élément de réseau et système de communication

Publications (1)

Publication Number Publication Date
WO2025129531A1 true WO2025129531A1 (fr) 2025-06-26

Family

ID=96136132

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/140443 Pending WO2025129531A1 (fr) 2023-12-20 2023-12-20 Procédé de communication, premier élément de réseau, second élément de réseau et système de communication

Country Status (2)

Country Link
CN (1) CN121080003A (fr)
WO (1) WO2025129531A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115835185A (zh) * 2021-09-18 2023-03-21 华为技术有限公司 一种人工智能模型下载方法、装置及系统
WO2023039905A1 (fr) * 2021-09-18 2023-03-23 Oppo广东移动通信有限公司 Procédé et appareil de transmission de données ai, dispositif, et support de stockage
CN116889009A (zh) * 2023-05-15 2023-10-13 北京小米移动软件有限公司 通信方法及装置、通信设备、通信系统
US20230354247A1 (en) * 2022-04-29 2023-11-02 Qualcomm Incorporated Machine learning model positioning performance monitoring and reporting


Also Published As

Publication number Publication date
CN121080003A (zh) 2025-12-05

Similar Documents

Publication Publication Date Title
WO2025000129A1 (fr) Procédé et appareil de transmission d'informations, dispositif de communication, système de communication et support de stockage
WO2025000303A1 (fr) Procédé d'indication d'informations, terminal, dispositif de réseau, système de communication et support de stockage
WO2025035460A1 (fr) Procédé de transmission d'informations de capacité, terminal, dispositif de réseau, système de communication et support
CN117136614A (zh) 配置方法及装置、通信设备、通信系统、存储介质
WO2025025117A1 (fr) Procédé de communication, dispositif réseau, terminal, système de communication et support de stockage
WO2025015612A1 (fr) Procédé et appareil de configuration de ressources et support de stockage
WO2025025023A1 (fr) Procédé de communication, terminal et dispositif réseau
WO2025020001A1 (fr) Procédé d'indication d'informations, terminal, dispositif réseau, système de communication et support de stockage
WO2025059951A1 (fr) Procédé de communication, terminal, dispositif de réseau et support de stockage
WO2025050326A1 (fr) Procédé de commutation de communication, terminal, dispositif de réseau et support de stockage
CN117337607A (zh) 频段切换方法、终端、网络设备以及存储介质
WO2025129531A1 (fr) Procédé de communication, premier élément de réseau, second élément de réseau et système de communication
WO2025020085A1 (fr) Procédé de communication, répéteur, dispositif de réseau et support de stockage
WO2025189403A1 (fr) Procédés et dispositifs de traitement d'informations, et support de stockage
CN117581503A (zh) 传输配置指示状态激活的处理方法、终端及网络设备
WO2025111947A1 (fr) Procédés de gestion de cycle de vie, procédé d'envoi d'indication, terminaux et dispositifs de réseau
WO2025118248A1 (fr) Procédé de traitement, appareil et support de stockage
WO2025060012A1 (fr) Procédé de traitement d'informations, dispositif et support de stockage
WO2025091388A1 (fr) Procédé et appareil de traitement, et support de stockage
WO2025156096A1 (fr) Procédé de communication, appareil et support de stockage
WO2025050319A1 (fr) Procédé de traitement de communication, terminal, dispositif réseau, système de communication et support de stockage
WO2025081460A1 (fr) Procédé d'indication, terminal, dispositif de réseau et support de stockage
WO2025175449A1 (fr) Procédés et appareils de traitement d'informations
WO2025199819A1 (fr) Procédé de traitement d'informations, système de communication et support de stockage
WO2025081435A1 (fr) Procédé de mesure, dispositif et support de stockage

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23961870

Country of ref document: EP

Kind code of ref document: A1