
WO2025237559A1 - Supporting a model operation in a wireless communication system - Google Patents

Supporting a model operation in a wireless communication system

Info

Publication number
WO2025237559A1
Authority
WO
WIPO (PCT)
Prior art keywords
network entity
aimle
wireless communication
model operation
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/EP2025/056437
Other languages
French (fr)
Inventor
Emmanouil Pateromichelakis
Dimitrios Karampatsis
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo International Cooperatief UA
Original Assignee
Lenovo International Cooperatief UA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo International Cooperatief UA filed Critical Lenovo International Cooperatief UA
Publication of WO2025237559A1 publication Critical patent/WO2025237559A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14 Network analysis or design
    • H04L41/145 Network analysis or design involving simulating, designing, planning or modelling of a network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/40 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L41/5058 Service discovery by the service manager

Definitions

  • the present disclosure relates generally to wireless communication (or wireless communication network), including supporting a model operation in a wireless communication system.
  • a wireless communications system may include one or multiple network communication devices, which may be otherwise known as network equipment (NE), supporting wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE), or other suitable terminology.
  • the wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers, or the like)).
  • the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G)).
  • the phrase “based on” shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as “based on condition A” may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase “based on” shall be construed in the same manner as the phrase “based at least in part on.” Further, as used herein, including in the claims, a “set” may include one or more elements.
  • ADAES - Application Data Analytics Enablement Server; AIML - Artificial Intelligence/Machine Learning; AIMLE - AI/ML Enablement; API - Application Programming Interface; ASP - Application Service Provider; DN - Data Network; EAS - Edge Application Server; EDN - Edge Data Network; EES - Edge Enabler Server; FL - Federated Learning; HFL - Horizontal Federated Learning; ML - Machine Learning; MNO - Mobile Network Operator; NEF - Network Exposure Function; PLMN - Public Land Mobile Network; RAT - Radio Access Technology; SEAL - Service Enabler Architecture Layer; SEALDD - SEAL Data Delivery; TL - Transfer Learning; VAL - Vertical Application Layer; VFL - Vertical Federated Learning.
  • a central network entity for wireless communication is described.
  • the central network entity may be configured to, capable of, or operable to perform one or more operations as described herein.
  • the central network entity may include at least one memory; and at least one processor coupled with the at least one memory and configured to cause the central network entity to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.
  • a method performed by a central network entity may comprise: receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
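As one illustrative reading of the request/response exchange above, the central network entity fans the discovery criterion out to the distributed network entities and merges their answers into a single response message. The sketch below is a minimal, hypothetical model of that flow; all class and field names (CentralServer, DiscoveryCriterion, etc.) are illustrative and do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class DiscoveryCriterion:
    model_operation: str   # e.g., "FL", "TL", "model_training"
    allowed_plmns: list    # PLMN IDs where service may be received

@dataclass
class DistributedServer:
    plmn_id: str
    nodes: dict = field(default_factory=dict)  # node_id -> supported operations

    def discover(self, criterion):
        # Return information on nodes able to support at least part
        # of the requested model operation.
        return [
            {"node_id": n, "plmn": self.plmn_id}
            for n, ops in self.nodes.items()
            if criterion.model_operation in ops
        ]

class CentralServer:
    def __init__(self, distributed):
        self.distributed = distributed  # one per wireless communication network

    def handle_request(self, criterion):
        # Query each distributed network entity and merge the results
        # into a single response message for the requesting entity.
        response = []
        for server in self.distributed:
            response.extend(server.discover(criterion))
        return {"discovered_nodes": response}

central = CentralServer([
    DistributedServer("PLMN-1", {"ue-a": {"FL"}, "ue-b": {"TL"}}),
    DistributedServer("PLMN-2", {"ue-c": {"FL", "TL"}}),
])
resp = central.handle_request(DiscoveryCriterion("FL", ["PLMN-1", "PLMN-2"]))
# resp now holds first information (from PLMN-1) and second information
# (from PLMN-2) in one response message
```

The key design point this illustrates is that the consumer sends one request and receives one response, even though the discovery spans two wireless communication networks.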
  • a first network entity for wireless communication is described.
  • the first network entity may be configured to, capable of, or operable to perform one or more operations as described herein.
  • the first network entity may include at least one memory; and at least one processor coupled with the at least one memory and configured to cause the first network entity to: transmit, to a central network entity, a request to support a model operation according to a discovery criterion; and receive, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • a method performed by a first network entity may comprise: transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • Figure 1 illustrates an example of a wireless communications system in accordance with aspects of the present disclosure.
  • Figure 2 illustrates a diagram of an on-network AIMLE functional model in accordance with aspects of the present disclosure.
  • Figure 3 illustrates a diagram showing ML model lifecycle enablement in accordance with aspects of the present disclosure.
  • Figure 4 illustrates a diagram showing an example of a hierarchical deployment of AIMLE in accordance with aspects of the present disclosure.
  • Figure 5 illustrates a diagram showing relationships for VAL services in accordance with aspects of the present disclosure.
  • Figure 6 illustrates a diagram of a multi-operator scenario in accordance with aspects of the present disclosure.
  • Figure 7 illustrates an example of a process flow for AIMLE client discovery in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
  • Figure 8 illustrates an example of a process flow for AIMLE client selection in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
  • Figure 9 illustrates an example of a UE 900 in accordance with aspects of the present disclosure.
  • Figure 10 illustrates an example of a processor 1000 in accordance with aspects of the present disclosure.
  • Figure 11 illustrates an example of a NE 1100 in accordance with aspects of the present disclosure.
  • Figure 12 illustrates a flowchart of a method 1200 performed by a NE in accordance with aspects of the present disclosure.
  • Figure 13 illustrates a flowchart of a method 1300 performed by a NE in accordance with aspects of the present disclosure.
  • a wireless communication system including one or more UEs and NEs may support an AIMLE framework for integrating AI/ML functionalities.
  • AIMLE tends to facilitate model operations such as model training, deployment and inference across various AI/ML participants such as AIMLE clients or VAL clients in the wireless communication system.
  • the AIMLE servers are located at different EDNs/DNs (or distributed network entities) and deployed by the same provider. Such a scenario may be referred to as a hierarchical deployment of AIMLE (or hierarchical AIMLE).
  • Hierarchical AIMLE may be performed using multiple AI/ML participants covered by an EDN in a wireless communication network.
  • in some scenarios, the number (or quantity) of available (or fully capable) AI/ML participants covered by the EDN in the wireless communication network is insufficient to perform the model operation.
  • Examples described herein may relate to the integration of AI/ML participants across multiple wireless communication systems when the AIMLE servers are located at different EDNs. Integrating AI/ML participants across multiple wireless communication systems tends to improve hierarchical AIMLE in one wireless communication system by facilitating access to additional AI/ML participants in another wireless communication system.
  • FIG. 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure.
  • the wireless communications system 100 may include one or more NE 102, one or more UE 104, and a core network (CN) 106.
  • the wireless communications system 100 may support various radio access technologies.
  • the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-Advanced (LTE-A) network.
  • the wireless communications system 100 may be a NR network, such as a 5G network, a 5G-Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network.
  • the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), IEEE 802.20.
  • The wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
  • An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area.
  • an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies.
  • an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN).
  • different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
  • the one or more UE 104 may be dispersed throughout a geographic region of the wireless communications system 100.
  • a UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology.
  • the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples.
  • the UE 104 may be referred to as an Internet-of-Things (IoT) device, an Internet-of-Everything (IoE) device, or a machine-type communication (MTC) device, among other examples.
  • a UE 104 may be able to support wireless communication directly with other UEs 104 over a communication link.
  • a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link.
  • the communication link may be referred to as a sidelink.
  • a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
  • An NE 102 may support communications with the CN 106, or with another NE 102, or both.
  • an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1, N2, N3, or another network interface).
  • the NE 102 may communicate with each other directly.
  • the NE 102 may communicate with each other indirectly (e.g., via the CN 106).
  • one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC).
  • An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
  • the CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions.
  • the CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management functions (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)).
  • control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
  • the CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1, N2, N3, or another network interface).
  • the packet data network may include an application server.
  • one or more UEs 104 may communicate with the application server.
  • a UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102.
  • the CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session).
  • the PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
  • the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications).
  • the NEs 102 and the UEs 104 may support different resource structures.
  • the NEs 102 and the UEs 104 may support different frame structures.
  • the NEs 102 and the UEs 104 may support a single frame structure.
  • the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures).
  • the NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
  • One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix.
  • a time interval of a resource may be organized according to frames (also referred to as radio frames).
  • Each frame may have a duration, for example, a 10 millisecond (ms) duration.
  • each frame may include multiple subframes.
  • each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration.
  • each frame may have the same duration.
  • each subframe of a frame may have the same duration.
  • a time interval of a resource may be organized according to slots.
  • a subframe may include a number (e.g., quantity) of slots.
  • the number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100.
  • Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols).
  • the number (e.g., quantity) of slots for a subframe may depend on a numerology.
  • For a normal cyclic prefix, a slot may include 14 symbols.
  • For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols.
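The slot and symbol counts above follow directly from the numerology: a subcarrier spacing of 15·2^μ kHz yields 2^μ slots per 1 ms subframe, with 14 symbols per slot for a normal cyclic prefix and 12 for an extended cyclic prefix. A small sketch of that arithmetic (function names are illustrative, following common 3GPP NR conventions):

```python
# Subcarrier spacing = 15 * 2**mu kHz; each 1 ms subframe holds 2**mu slots.
def slots_per_subframe(scs_khz: int) -> int:
    mu = {15: 0, 30: 1, 60: 2, 120: 3, 240: 4}[scs_khz]
    return 2 ** mu

def symbols_per_slot(cyclic_prefix: str = "normal") -> int:
    # Extended cyclic prefix applies for 60 kHz subcarrier spacing.
    return 14 if cyclic_prefix == "normal" else 12

# A 10 ms frame with 30 kHz SCS: 10 subframes * 2 slots * 14 symbols.
frame_symbols = 10 * slots_per_subframe(30) * symbols_per_slot()
print(frame_symbols)  # 280
```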
  • an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc.
  • the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz).
  • the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands.
  • FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data).
  • FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
  • FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies).
  • FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies).
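The frequency range designations listed above can be read as a simple lookup over frequency boundaries. The sketch below is illustrative only (the range names and bounds come from the list above; the function is an assumption); note that FR4a/FR4-1 overlaps FR4, so it is checked first.

```python
# Boundaries in GHz, ordered so the narrower FR4a/FR4-1 range wins over FR4.
FREQ_RANGES = [
    ("FR4a/FR4-1", 52.6, 71.0),
    ("FR4", 52.6, 114.25),
    ("FR5", 114.25, 300.0),
    ("FR2", 24.25, 52.6),
    ("FR3", 7.125, 24.25),
    ("FR1", 0.410, 7.125),
]

def classify(freq_ghz: float) -> str:
    for name, lo, hi in FREQ_RANGES:
        if lo <= freq_ghz < hi:
            return name
    raise ValueError(f"{freq_ghz} GHz is outside the defined ranges")

print(classify(3.5))   # FR1
print(classify(28.0))  # FR2
print(classify(60.0))  # FR4a/FR4-1
```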
  • the AIMLE service plays a role in the exposure of AI/ML services from different 3GPP domains to the vertical/ASP in a unified manner on top of the 3GPP core network and OAM, and in defining, at a SEAL layer, value-added support services for assisting AI/ML services provided by the VAL layer, while being complementary to AI/ML support solutions provided in other 3GPP domains.
  • Figure 2 illustrates a diagram 200 of an on-network AIMLE functional model in accordance with aspects of the present disclosure.
  • the diagram 200 comprises a UE 205, a 3GPP Network System 240, a VAL server(s) 216, an ML Repository 232, and an AIMLE server 236.
  • the UE 205 comprises a VAL 210 and a SEAL 220.
  • the VAL 210 comprises a VAL Client(s) 212.
  • the SEAL 220 comprises an AIMLE client 232.
  • the VAL server(s) 216 connects to the VAL Client(s) 212 via a VAL-UU 214.
  • the AIMLE client 232 connects to the AIMLE server 236 via an AIML-UU 234.
  • the 3GPP Network System 240 connects to the AIMLE server 236 via Network interfaces 235.
  • the AIMLE server 236 connects to the VAL server(s) 216 via an AIML-S 231.
  • the AIMLE server 236 connects to the ML Repository 232 via an AIML-R 233.
  • the AIMLE server 236 comprises an AIML-E 238.
  • the VAL client 212 communicates with the VAL server 216 over VAL-UU 214 reference point.
  • VAL-UU 214 supports both unicast and multicast delivery modes.
  • the AIML enablement functional entities on the UE 205 and the server are grouped into AIMLE client(s) AIML-C 230 and AIML enablement server(s) 231, respectively.
  • AIMLE server 231 may be a newly defined SEAL server 220 which includes a common set of services for comprehensive enablement of AIML functionality.
  • AIMLE server 231 may define one or more of the following group of capabilities:
  • AIMLE client functional entity acts as the application client supporting AIMLE services.
  • ML repository is an entity that may serve as:
  • FIG. 3 illustrates an ML model lifecycle enablement diagram 300 in accordance with aspects of the present disclosure.
  • the ML model lifecycle enablement diagram 300 comprises a VAL server(s) 316 for ML model operational workflow and an AIMLE 330 for ML model lifecycle enablement.
  • the VAL Server(s) 316 comprises data management, model training, model evaluation, model deployment and model inference.
  • the AIMLE 330 comprises ML model related support which comprises model retrieval, model discovery and model storage.
  • the AIMLE 330 further comprises ML operation related support which comprises VFL/HFL enablement, TL enablement, split AI/ML operation, Data management support and FL member support e.g., grouping, register and events.
  • the AIMLE 330 further comprises AIMLE client support which comprises AIMLE register, discovery, participate and monitoring.
  • One role of the AIMLE 330 is ML model lifecycle enablement, which provides assistance for use cases where an ASP/VAL layer aims to find and use other application entities to perform some ML operations (e.g., ML model inference), with an AIMLE server as a mediator to accomplish that.
  • AIMLE 330 may undertake:
  • ML operation related support capabilities such as VFL/HFL and TL enablement, Split AI/ML Operation support, Data management assistance, AI/ML task transfer, FL assistance in member grouping, registration and event notification (as covered in procedures in clauses 8.4, 8.6, 8.12, 8.14, 8.15-8.18 of TS 23.482 V19.0.0).
  • AIMLE client related support capabilities including AIMLE client registration, discovery, participation, monitoring, selection (as covered in procedures in clauses 8.7-8.10, 8.13 of TS 23.482 V19.0.0).
  • FIG. 4 illustrates a diagram 400 showing an example of a hierarchical deployment of AIMLE in accordance with aspects of the present disclosure.
  • Diagram 400 comprises an EDN A1 460, an EDN A2 462, a Centralized DN (DNN-B) 464 and a PLMN 440.
  • EDN A1 460 comprises a first EAS 454, a first EES 450 and AIMLE server #1.1 436.
  • EDN A2 462 comprises a second EAS 456, a second EES 452 and AIMLE server #1.2 437.
  • the Centralized DN 464 comprises VAL server 416 and AIMLE server #1 438 (e.g., the central AIMLE server 438).
  • PLMN 440 comprises AIMLE 1.1 - service area 466 and AIMLE 1.2 - service area 468.
  • AIMLE server #1.1 436 and AIMLE server #1.2 437 may be located at different EDNs/DNs (e.g., EDN A1 460 and EDN A2 462) and can be deployed by the same provider.
  • Such hierarchical deployments allow the local - global ML operations (e.g., FL across domains).
  • the ML support services provided by the edge-deployed AIMLE servers correspond to the AIMLE service areas (e.g., AIMLE 1.1 - service area 466 and AIMLE 1.2 - service area 468), which are equivalent to the EDN service areas.
  • the central AIMLE server 438 covers the whole PLMN 440 area and is used to coordinate the ML-related operations (e.g., as FL server / aggregator) with the distributed AIMLE servers (e.g., AIMLE server #1.1 436 and AIMLE server #1.2 437).
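In such a hierarchical deployment, the coordination role of the central AIMLE server can be illustrated with a weighted federated-averaging step, where each edge-deployed AIMLE server contributes a locally aggregated model update for its service area. This is a hypothetical sketch, not a procedure from the disclosure; all function and variable names are illustrative.

```python
# Minimal sketch of the local-global ML coordination: the central server
# (acting as FL server / aggregator) combines edge updates, weighted by
# the number of participants each edge server aggregated.
def aggregate(updates):
    """updates: list of (weight_vector, num_participants) per edge server."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    global_w = [0.0] * dim
    for w, n in updates:
        for i in range(dim):
            global_w[i] += w[i] * n / total
    return global_w

# e.g., AIMLE server #1.1 aggregates 3 participants, #1.2 aggregates 1
edge_1_1 = ([0.25, 0.5], 3)
edge_1_2 = ([0.75, 0.0], 1)
print(aggregate([edge_1_1, edge_1_2]))  # [0.375, 0.375]
```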
  • Figure 5 illustrates a diagram 500 showing relationships of VAL services in accordance with aspects of the present disclosure.
  • the diagram 500 comprises a VAL user 510, a VAL service provider 512, a SEAL provider 520, a Home PLMN operator 540 and a Visited PLMN operator 542.
  • the VAL user 510 belongs to a VAL service provider 512 based on a VAL service agreement between the VAL user 510 and the VAL service provider 512.
  • the VAL service provider 512 may have VAL service agreements with several VAL users 510.
  • the VAL user 510 may have VAL service agreements with several VAL service providers 512.
  • the VAL service provider 512 and the home PLMN operator 540 may be part of the same organization, in which case the business relationship between the two is internal to a single organization.
  • the VAL service provider 512 may have SEAL provider arrangements with multiple SEAL providers 520 and the SEAL provider 520 may have PLMN operator service arrangements with multiple home PLMN operators 540.
  • the SEAL provider 520 and the VAL service provider 512 or the home PLMN operator 540 may be part of the same organization, in which case the business relationship between the two is internal to a single organization.
  • the home PLMN operator 540 may have PLMN operator service arrangements with multiple VAL service providers 512 and the VAL service provider may have PLMN operator service arrangements with multiple home PLMN operators 540.
  • PLMN subscription arrangements may be provided which allow the VAL UEs to register with the home PLMN operator 540 network.
  • the home PLMN operator 540 may have PLMN roaming agreements with multiple visited PLMN operators 542 and the visited PLMN operator 542 may have PLMN roaming agreements with multiple home PLMN operators 540.
  • Figure 6 illustrates a diagram 600 showing a multi-operator scenario in accordance with aspects of the present disclosure.
  • the diagram 600 comprises a first UE 605, a second UE 607, a first AIMLE client 632 connected to an AIMLE server 636, a second AIMLE client 634 connected to the AIMLE server 636, a first 3GPP Network 640 connected to the AIMLE server 636, a second 3GPP Network 642 connected to the AIMLE server 636, and VAL servers 616 connected to the AIMLE server 636.
  • the first 3GPP Network 640 comprises an AIMLE Server 1.1 635.
  • the second 3GPP Network 642 comprises an AIMLE Server 1.2 637.
  • AIMLE Server 1.1 635 and AIMLE Server 1.2 637 connect to the AIMLE Server 636 (e.g., Central AIMLE Server 636).
  • Some examples described herein relate to an application service (e.g., a VAL service, ASP service, or AI/ML session or service) that runs in a certain service area, which may be an edge or cloud service area or a geographical area, where UEs connected to different PLMNs are present. Such a service may be considered a multi-operator service.
  • the VAL UEs may have AIMLE clients installed and active and connect to different AIMLE servers provided by different MNOs or trusted 3rd parties of the MNOs.
  • the application service may be an application enablement service (e.g., AIMLE client to server) where AIMLE-UU may be over multiple MNO networks or it may be a VAL service (e.g., VAL client to server) where VAL-UU may be over multiple PLMNs.
  • a Central AIMLE server 636 connects to two PLMNs (including different RATs, e.g., 5G, 6G), whereas AIMLE server 1.1 635 and AIMLE server 1.2 637 each connect to a single PLMN.
  • the first UE 605 is connected with PLMN1 and AIMLE server 1.1 635 and the second UE 607 is connected with PLMN 2 and AIMLE server 1.2 637.
  • Different VAL servers 616 may connect to the VAL client of the first UE 605 and a different VAL server to that of the second UE 607; however, both VAL servers 616 are connected to the same AIMLE server 636.
  • an AI operation may run for a V2X scenario with VAL UEs of an AI-enhanced traffic safety or optimization service belonging to different networks, where AIMLE server 1.1 635 is provided by a service provider / MNO in an edge area, whereas AIMLE server 1.2 637 is provided by a regional network operator.
  • the AIMLE server 636 may be a cloud provider or 3rd party VMNO connecting to both MNO1 and MNO2.
  • Some examples described herein relate to a mechanism for discovering and selecting AI/ML participants (AIMLE clients, VAL clients) in an AIMLE-assisted operation (e.g., ML model training, FL, TL) in a given service area assuming a hierarchical AIMLE deployment model, where the AIMLE servers are connected to different PLMNs.
  • Some examples described herein relate to enhancements in existing AIMLE services and AIMLE-E interfaces for supporting cross-PLMN interactions, wherein the enhancements relate to receiving and processing information on the serving PLMN and an allowable list of PLMNs where the VAL server/client (as consumer) may receive AIMLE service.
  • Some examples described herein relate to determining and discovering AIMLE clients / UEs which are not within the serving PLMNs of the VAL. Hence, this step enables the central AIMLE server 636 to obtain supplementary AI/ML participants via edge-deployed AIMLE servers when there is a lack of availability or capability among the VAL UEs which are connected to the serving PLMN.
  • Some examples described herein relate to the enhancements to the interactions among AIMLE servers for supporting AIMLE client discovery in multi-PLMN scenarios. Some examples described herein relate to the enhancements to the interactions among AIMLE servers for supporting AIML participant selection in multi-PLMN scenarios.
  • the allowed PLMN may be a PLMN to which the UE can roam or be connected via dual connectivity (registered but not connected to the target PLMN).
  • the allowed PLMNs correspond to the PLMNs with which the Server (AIMLE service provider or VAL provider or ASP) has service agreements; however, there are no active connections to VAL UEs, or no network service is consumed by the Server from the other PLMNs in the allowed list.
  • the preferred PLMN is part of the allowed PLMNs, but the VAL UE has a certain priority due to a better subscription from the service agreement between the ASP / SEAL provider and the preferred PLMN.
  • the serving PLMN corresponds to the PLMN providing network services and is used for supporting the communication to the AIMLE clients / VAL clients (at the corresponding VAL UEs).
  • Based on the VAL server request, the AIMLE server initially checks both PLMNs for the best AIMLE clients and performs discovery/selection. Based on the VAL server request and its preference for a specific PLMN for communicating the service, the AIMLE server checks the primary/preferred PLMN first and then gets supplementary clients from other PLMNs if needed.
  • the AI/ML operation which requires cross-PLMN support may also expand to support VFL scenarios where the server may find clients from multiple PLMNs in order to train the model. The requirements may include a PLMN list and required number of common samples to be supported by each FL client.
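The distinction above between serving, preferred, and allowed PLMNs implies an ordering in which an AIMLE server may check networks for clients: the serving PLMN first, then the preferred PLMN, then the remaining allowed PLMNs. The following is a minimal Python sketch of that ordering; the names (`PlmnInfo`, `discovery_order`) are illustrative assumptions, not part of any specification:

```python
from dataclasses import dataclass, field

@dataclass
class PlmnInfo:
    serving: str                              # PLMN providing network services to the VAL UEs
    allowed: list = field(default_factory=list)  # PLMNs covered by service agreements
    preferred: str = ""                       # subset of allowed, prioritized by subscription

def discovery_order(plmns: PlmnInfo) -> list:
    """Return the order in which PLMNs are checked for AIMLE clients:
    serving first, then the preferred PLMN, then remaining allowed PLMNs."""
    order = [plmns.serving]
    if plmns.preferred and plmns.preferred not in order:
        order.append(plmns.preferred)
    for plmn in plmns.allowed:
        if plmn not in order:
            order.append(plmn)
    return order
```

For example, with serving PLMN1, allowed PLMN1-3, and preferred PLMN3, the sketch yields the order PLMN1, PLMN3, PLMN2.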
  • FIG. 7 illustrates an example of a process flow 700 for AIMLE client discovery in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
  • the process flow 700 may implement or be implemented by aspects of the wireless communication system 100.
  • the process flow 700 may include VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737, which may be one or more examples of devices described herein with reference to Figure 1.
  • the process flow 700 may be referred to as a procedure, including one or more operations performed by one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737.
  • the operations or signalling performed between one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737 may be performed or signalled (e.g., transmitted, received) in a different order than the example order shown, or the operations or signalling performed by one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737 may be performed or signalled (e.g., transmitted, received) in different orders or at different times.
  • Some operations or signalling may also be omitted from the process flow 700. Additionally, although some operations or signalling may be shown to occur at different times, these operations or signalling may occur at the same time or in overlapping time periods.
  • VAL servers 716 may discover suitable AIMLE clients to fulfil the requirements for the AI/ML application. The VAL server 716 may then use the discovered AIMLE clients to select a set of AIMLE clients to perform AI/ML operations.
  • the following enhancements are provided (over the AIMLE-S interface, or the SEAL-X interface if the AIMLE consumer is another SEAL server, e.g., ADAES).
  • AIMLE clients that support AI/ML operations have registered with the corresponding AIMLE server (e.g., AIMLE server 1.1 735 or AIMLE server 1.2 737) and included their AIMLE client profiles and optionally a list of supported services.
  • the central AIMLE server may access a ML repository to obtain AIMLE client profiles and supported services associated with AIMLE clients.
  • the VAL server 716 sends an AIMLE client discovery request to a central AIMLE server 736 to discover a list of AIMLE clients that are available to participate in AI/ML operations (e.g., clients that are available and have the required data to train an ML model).
  • the request may also include AIML client task capability requirements to discover clients that can perform AIML tasks such as AIML model training/offload/split, with compute requirements and a task performance preference such as green task performance.
  • This request may also include information on the serving and/or preferred PLMN for the communication of the VAL service, as well as the allowed MNO info for other MNO networks from which the AIMLE consumer / VAL server can discover AIMLE clients / VAL UEs.
  • Table 1 describes information elements that may be included in the AIMLE client discovery request.
  • Table 2 describes information elements that may be included in the AIMLE client discovery criteria.
  • the central AIMLE server 736 performs authentication and authorization checks to determine if the requestor is able to discover AIMLE clients for the VAL UEs connected to the serving PLMN.
  • the central AIMLE server 736 determines and discovers the AIMLE server(s) corresponding to the target PLMNs as provided in the allowed MNO info and serving PLMN.
  • the central AIMLE server 736 sends an AIMLE client discovery request to the AIMLE Server 1.1 735 (of serving PLMN), where the request includes: the requestor VAL server ID and address, the service ID and profile/requirements, the capability requirements and discovery filter criteria (which can be a subset of parameters of Table 8.8.3.1-2 of TS 23.482 with the addition of the new parameters as in step 771).
  • the central AIMLE server 736 receives an AIMLE client discovery response from the AIMLE Server 1.1 735 (of serving PLMN), where the response includes the discovered AIMLE client IDs and addresses connected to the AIMLE server 1.1 (of PLMN 1) in the given service area.
  • the central AIMLE server 736 identifies that the number of AIMLE clients is not sufficient for the AI/ML operation / AIMLE service, or that more AIMLE clients are required based on the VAL requirements, and triggers discovery of AIMLE clients from AIMLE Server 1.2 (PLMN2).
  • at step 776, the central AIMLE server 736 repeats steps 773-774 for the AIMLE Server 1.2 737 to fetch the discovered AIMLE clients of AIMLE server 1.2 737.
  • the central AIMLE server 736 sends an AIMLE client discovery response which may include the cross-PLMN information with the PLMN ID/name from which the discovered VAL UEs were provided. If the required number of AIMLE clients is not included in the response message from the central AIMLE server 736, then the AIML service consumer may discover the remaining required AIMLE clients directly from the corresponding AIMLE server 1.1 735 or AIMLE server 1.2 737 by providing the alternative/target/destination PLMN, or based on the AIMLE server ID/address indicating the required number of AIMLE clients from the destination PLMN.
  • Table 3 describes the information elements that may be included in the AIMLE client discovery response.
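The discovery flow above (query the AIMLE server of the serving PLMN first, supplement from allowed PLMNs when the discovered clients are not sufficient, and return cross-PLMN information in the response) can be sketched as follows. All names are hypothetical illustrations, and the per-PLMN AIMLE servers are modeled as simple callables rather than real service interfaces:

```python
from dataclasses import dataclass

@dataclass
class DiscoveryRequest:
    val_server_id: str       # requestor VAL server ID
    service_id: str          # service ID and profile/requirements (simplified)
    serving_plmn: str        # serving PLMN for the VAL service
    allowed_mnos: list       # allowed MNO info (other networks)
    required_clients: int    # number of AIMLE clients needed

def central_discovery(request, plmn_servers):
    """plmn_servers: dict mapping PLMN ID -> callable returning the
    discovered AIMLE client IDs of that PLMN's AIMLE server."""
    discovered = []
    # Query the AIMLE server of the serving PLMN first.
    discovered += [(c, request.serving_plmn)
                   for c in plmn_servers[request.serving_plmn](request)]
    # If not sufficient, supplement from allowed PLMNs.
    for plmn in request.allowed_mnos:
        if len(discovered) >= request.required_clients:
            break
        if plmn == request.serving_plmn:
            continue
        discovered += [(c, plmn) for c in plmn_servers[plmn](request)]
    # Response carries cross-PLMN info: (client ID, source PLMN) pairs.
    return discovered[:request.required_clients]
```

In this sketch, a request for three clients with two available in the serving PLMN returns two serving-PLMN clients plus one client fetched from the allowed PLMN, each tagged with its source PLMN.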
  • Figure 8 illustrates an example of a process flow 800 for AIMLE client selection in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
  • the process flow 800 may implement or be implemented by aspects of the wireless communication system 100.
  • the process flow 800 may include VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832, which may be one or more examples of devices described herein with reference to Figure 1.
  • the process flow 800 may be referred to as a procedure, including one or more operations performed by one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832.
  • the operations or signalling performed between one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832 may be performed or signalled (e.g., transmitted, received) in a different order than the example order shown, or the operations or signalling performed by one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832 may be performed or signalled (e.g., transmitted, received) in different orders or at different times. Some operations or signalling may also be omitted from the process flow 800. Additionally, although some operations or signalling may be shown to occur at different times, these operations or signalling may occur at the same time or in overlapping time periods.
  • Process flow 800 focuses on the interaction among AIMLE servers (central and PLMN-specific).
  • AIMLE clients that support AI/ML operations may have registered with the central AIMLE server 836 and included their AIMLE client profiles and optionally a list of supported services.
  • the central AIMLE server 836 may access the ML repository 832 to obtain AIMLE client profiles and supported services associated with the AIMLE clients.
  • the VAL server 816 (e.g., AIMLE consumer) sends an AIMLE client selection request to AIMLE server #1.1 835 to select AIMLE clients available for participation in AI/ML operations (e.g., clients that are available and have the required data to train an ML model) based on the discovery procedure.
  • Table 4 describes the information elements that may be included in the AIMLE client selection request.
  • the AIMLE server #1.1 835 performs authentication and authorization checks to determine if the requestor is able to select AIMLE clients.
  • the AIMLE server #1.1 835, based on the request, evaluates whether the AIMLE clients are capable of acting as ML/FL participants for the AI/ML operation (AIMLE or VAL service or analytics service).
  • interaction with the ML repository 832 may be used to identify the availability status of the AIMLE clients. If more AIMLE clients are needed, the AIMLE server 1.1 835 determines to request assistance from AIMLE servers of other MNOs (e.g., AIMLE Server 1.2 837).
  • the AIMLE server #1.1 835 requests the central AIMLE server 836 to discover and fetch information on other AIMLE servers covering the same service area to which the request applies.
  • Such request can be in form of a trigger event, where the central AIMLE server 836 is expected to determine an action for discovering additional AIMLE clients from other AIMLE servers.
  • Such request also includes the allowed MNO info for the VAL server 816 (e.g., to identify allowed PLMN-specific AIMLE servers).
  • since the ML repository 832 may keep track of the AIMLE clients (or, more generally, the ML/FL candidate members) which are matched per PLMN, the central AIMLE server 836 fetches the list of available FL/ML members (and/or VAL UEs) for the requested PLMN ID.
  • at step 875, alternatively or complementarily to step 874, the central AIMLE server 836 queries the availability of AIMLE clients to AIMLE server #1.2 837 for the AI/ML operation (AIMLE or VAL service or analytics service).
  • at step 876, based on the discovered AIMLE clients / ML or FL members from AIMLE server 1.2 837 (directly or via the ML repository 832), the central AIMLE server 836 selects which entities are going to act as ML/FL members for the AI/ML operation (e.g., FL, ML model inference, training). Such selection is based on the permissions/capabilities and authorizations/agreements of the VAL applications with the additional MNO(s). Also, such selection may be based on different factors such as a rating or weights for the given operation (AIMLE clients of a different MNO can have a lower rating given the permissions/limitations, etc.).
  • the central AIMLE server 836 sends the selected AIMLE clients of AIMLE server #1.2 837 to the AIMLE server #1.1 835, where the message includes the list of AIMLE client IDs/addresses, the AIMLE server #1.2 ID and address, as well as cross-PLMN information and the permissions/capabilities/limitations corresponding to the selected AIMLE clients of the second AIMLE server 837.
  • This message may also include the service area of AIMLE server #1.2 837 (e.g., topological or geographical or EDN area) in case there is partial overlap between the coverage of AIMLE server #1.1 835 and AIMLE server #1.2 837.
  • Such information may also be provided in earlier steps (e.g., at step 874 based on ML repository information, or at step 876 as a factor for selecting an AIMLE client over others from other AIMLE servers).
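The rating-based selection at step 876, where AIMLE clients of a different MNO may receive a lower rating due to permissions/limitations, can be sketched as follows. The function name and the penalty factor are illustrative assumptions, not values from any specification:

```python
def select_members(candidates, needed, home_plmn, cross_plmn_penalty=0.5):
    """Select the top-rated ML/FL members for an AI/ML operation.

    candidates: list of (client_id, plmn_id, base_rating) tuples.
    Clients of other MNOs have their rating scaled down by
    cross_plmn_penalty to reflect permissions/limitations."""
    def effective_rating(candidate):
        _, plmn, rating = candidate
        return rating * (cross_plmn_penalty if plmn != home_plmn else 1.0)

    ranked = sorted(candidates, key=effective_rating, reverse=True)
    return [client_id for client_id, _, _ in ranked[:needed]]
```

For example, a cross-PLMN client with base rating 0.9 can be ranked below a home-PLMN client rated 0.6, because its effective rating after the penalty is 0.45.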
  • the processor 902, the memory 904, the controller 906, or the transceiver 908, or various combinations or components thereof may be implemented in hardware (e.g., circuitry).
  • the hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
  • the processor 902 may include an intelligent hardware device (e.g., a general- purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 902 may be configured to operate the memory 904. In some other implementations, the memory 904 may be integrated into the processor 902. The processor 902 may be configured to execute computer-readable instructions stored in the memory 904 to cause the UE 900 to perform various functions of the present disclosure.
  • the memory 904 may include volatile or non-volatile memory.
  • the memory 904 may store computer-readable, computer-executable code including instructions that, when executed by the processor 902, cause the UE 900 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as the memory 904 or another type of memory.
  • Computer-readable media includes both non- transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
  • the processor 902 and the memory 904 coupled with the processor 902 may be configured to cause the UE 900 to perform one or more of the functions described herein (e.g., executing, by the processor 902, instructions stored in the memory 904).
  • the processor 902 may support wireless communication at the UE 900 in accordance with examples as disclosed herein.
  • the UE 900 may be configured to support the arrangements described herein.
  • the controller 906 may manage input and output signals for the UE 900.
  • the controller 906 may also manage peripherals not integrated into the UE 900.
  • the controller 906 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems.
  • the controller 906 may be implemented as part of the processor 902.
  • the UE 900 may include at least one transceiver 908. In some other implementations, the UE 900 may have more than one transceiver 908.
  • the transceiver 908 may represent a wireless transceiver.
  • the transceiver 908 may include one or more receiver chains 910, one or more transmitter chains 912, or a combination thereof.
  • a receiver chain 910 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
  • the receiver chain 910 may include one or more antennas for receiving the signal over the air or wireless medium.
  • the receiver chain 910 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal.
  • the receiver chain 910 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
  • the receiver chain 910 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
  • a transmitter chain 912 may be configured to generate and transmit signals (e.g., control information, data, packets).
  • the transmitter chain 912 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM).
  • the transmitter chain 912 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmitter chain 912 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
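As one concrete illustration of the digital modulation schemes mentioned above (e.g., PSK), the following sketch maps bit pairs to Gray-coded QPSK symbols on the unit circle. The specific phase mapping is one common convention chosen for illustration, not one mandated by the disclosure:

```python
import math

def qpsk_modulate(bits):
    """Map bit pairs to Gray-coded QPSK symbols.

    Illustrative mapping: 00 -> 45 deg, 01 -> 135 deg,
    11 -> 225 deg, 10 -> 315 deg. Adjacent constellation points
    differ by one bit (Gray coding), limiting bit errors when a
    symbol is misdetected as a neighbour."""
    gray_to_phase_deg = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}
    symbols = []
    for i in range(0, len(bits), 2):
        phase = math.radians(gray_to_phase_deg[(bits[i], bits[i + 1])])
        symbols.append(complex(math.cos(phase), math.sin(phase)))
    return symbols
```

Demodulation (as performed by the receiver chain) reverses this mapping by deciding which constellation point each received sample is closest to.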
  • the processor 1000 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein.
  • the processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 1000)) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others).
  • the controller 1002 may be configured to manage and coordinate various operations (e.g., signalling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 1000 to cause the processor 1000 to support various operations in accordance with examples as described herein.
  • the controller 1002 may operate as a control unit of the processor 1000, generating control signals that manage the operation of various components of the processor 1000. These control signals include enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating timing of operations.
  • the one or more ALUs 1006 may be configured to support various operations in accordance with examples as described herein.
  • the one or more ALUs 1006 may reside within or on a processor chipset (e.g., the processor 1000).
  • the one or more ALUs 1006 may reside external to the processor chipset (e.g., the processor 1000).
  • One or more ALUs 1006 may perform one or more computations such as addition, subtraction, multiplication, and division on data.
  • one or more ALUs 1006 may receive input operands and an operation code, which determines an operation to be executed.
  • One or more ALUs 1006 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 1006 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 1006 to handle conditional operations, comparisons, and bitwise operations.
  • the processor 1000 may support wireless communication in accordance with examples as disclosed herein.
  • the processor 1000 may be configured to or operable to support a means for receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
  • the processor 1000 may be configured to or operable to support a means for transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • FIG. 11 illustrates an example of a NE 1100 in accordance with aspects of the present disclosure.
  • the NE 1100 may include a processor 1102, a memory 1104, a controller 1106, and a transceiver 1108.
  • the processor 1102, the memory 1104, the controller 1106, or the transceiver 1108, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
  • the processor 1102, the memory 1104, the controller 1106, or the transceiver 1108, or various combinations or components thereof may be implemented in hardware (e.g., circuitry).
  • the hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
  • the processor 1102 may include an intelligent hardware device (e.g., a general- purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 1102 may be configured to operate the memory 1104. In some other implementations, the memory 1104 may be integrated into the processor 1102. The processor 1102 may be configured to execute computer-readable instructions stored in the memory 1104 to cause the NE 1100 to perform various functions of the present disclosure.
  • the memory 1104 may include volatile or non-volatile memory.
  • the memory 1104 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1102, cause the NE 1100 to perform various functions described herein.
  • the code may be stored in a non-transitory computer-readable medium such as the memory 1104 or another type of memory.
  • Computer-readable media includes both non- transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • a non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
  • the processor 1102 and the memory 1104 coupled with the processor 1102 may be configured to cause the NE 1100 to perform one or more of the functions described herein (e.g., executing, by the processor 1102, instructions stored in the memory 1104).
  • the processor 1102 may support wireless communication at the NE 1100 in accordance with examples as disclosed herein.
  • the NE 1100 may be configured to support a means for receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
  • the NE 1100 may be configured to or operable to support a means for transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • the controller 1106 may manage input and output signals for the NE 1100.
  • the controller 1106 may also manage peripherals not integrated into the NE 1100.
  • the controller 1106 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems.
  • the controller 1106 may be implemented as part of the processor 1102.
  • the NE 1100 may include at least one transceiver 1108. In some other implementations, the NE 1100 may have more than one transceiver 1108.
  • the transceiver 1108 may represent a wireless transceiver.
  • the transceiver 1108 may include one or more receiver chains 1110, one or more transmitter chains 1112, or a combination thereof.
  • a receiver chain 1110 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium.
  • the receiver chain 1110 may include one or more antennas for receiving the signal over the air or wireless medium.
  • the receiver chain 1110 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal.
  • the receiver chain 1110 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal.
  • the receiver chain 1110 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
  • a transmitter chain 1112 may be configured to generate and transmit signals (e.g., control information, data, packets).
  • the transmitter chain 1112 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium.
  • the at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM).
  • the transmitter chain 1112 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium.
  • the transmitter chain 1112 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
  • Figure 12 illustrates a flowchart of a method 1200 in accordance with aspects of the present disclosure.
  • the operations of the method may be implemented by a NE as described herein.
  • the NE may execute a set of instructions to control the functional elements of the NE to perform the described functions.
  • the method 1200 may include receiving, from a first network entity, a request to support a model operation according to a discovery criterion.
  • the operations of 1202 may be performed in accordance with examples as described herein.
  • aspects of the operations of 1202 may be performed by a NE as described with reference to Figure 11.
  • the method 1200 may include determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion.
  • the operations of 1204 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1204 may be performed by a NE as described with reference to Figure 11.
  • the method 1200 may include determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • the operations of 1206 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1206 may be performed by a NE as described with reference to Figure 11.
  • the method 1200 may include transmitting, to the first network entity, a response message comprising the first information and the second information.
  • the operations of 1208 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1208 may be performed by a NE as described with reference to Figure 11.
  • Figure 13 illustrates a flowchart of a method 1300 in accordance with aspects of the present disclosure.
  • the operations of the method 1300 may be implemented by a NE as described herein.
  • the NE may execute a set of instructions to control the functional elements of the NE to perform the described functions.
  • the method 1300 may include transmitting, to a central network entity, a request to support a model operation according to a discovery criterion.
  • the operations of 1302 may be performed in accordance with examples as described herein.
  • aspects of the operations of 1302 may be performed by a NE as described with reference to Figure 11.
  • the method 1300 may include receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • the operations of 1304 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1304 may be performed by a NE as described with reference to Figure 11.
  • a central network entity for wireless communication comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the central network entity to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.
  • Such a central network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
  • the central network entity may be a central AIMLE server.
  • the first distributed network entity may be a first distributed AIMLE server.
  • the first distributed network entity may be part of a first PLMN.
  • the first wireless communication network may be the first PLMN.
  • the second distributed network entity may be part of a second PLMN.
  • the second wireless communication network may be the second PLMN.
  • the second distributed network entity may be a second distributed AIMLE server.
  • the first network entity may be a VAL server.
  • the model operation may be an Al model operation.
  • the model operation may be an application service.
  • the model operation may be a VAL service.
  • the model operation may be an ASP service.
  • the model operation may be an AI/ML session.
  • the model operation may be an AI/ML service.
  • the model operation may be an ML model operation.
  • the model operation may be an AI/ML model operation.
  • the model operation may be an operation of a model.
  • the model may be an AI/ML model.
  • the model operation may be an AIMLE-assisted operation.
  • the discovery criterion may be an AIMLE client discovery criteria.
  • the request to support the model operation may be an AIMLE client discovery request.
  • the response message may be an AIMLE client discovery response.
  • the first node may be a first user equipment.
  • the first node may be connected to the first wireless communication network.
  • the second node may be a second user equipment.
  • the second node may be connected to the second wireless communication network.
  • the first wireless communication network may be a first PLMN.
  • the first wireless communication network may be a first NPN.
  • the second wireless communication network may be a second PLMN.
  • the second wireless communication network may be a second NPN.
  • the first information corresponding to the first node for supporting at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 3 above.
  • the first information corresponding to the first node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 5 above.
  • the second information corresponding to the second node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 3 above.
  • the second information corresponding to the second node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 5 above.
  • the model operation may be a ML task or ML model task.
  • the model operation may be at least one of: a ML model training operation, a ML model inference operation, a FL training operation, a FL inference operation, a VFL or HFL operation, a Transfer Learning operation, a ML model lifecycle operation, a ML model pipeline and a workflow operation.
  • the discovery criterion may be at least one of the information elements described in Table 2 above.
  • the discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
  • the service area identifier may correspond to a service area.
  • the service area may be an ASP coverage area.
  • the service area may be an edge coverage area.
  • the service area may be a cloud coverage area.
  • the service area may comprise the first wireless communication network.
  • the service area may comprise the second wireless communication network.
  • the service area may correspond to the coverage of the serving wireless communication network.
  • the allowed mobile network operator may permit subscribers to consume an AIMLE service.
  • the allowed mobile network operator may be an MNO name.
  • the allowed mobile network operator may be a PLMN ID.
  • the allowed mobile network operator may be an operator of the first wireless communication network.
  • the allowed mobile network operator may be an operator of the second wireless communication network.
  • the information for accessing the first wireless communication network may comprise a permission for accessing the first wireless communication network.
  • the information for accessing the first wireless communication network may comprise an authorization for accessing the first wireless communication network.
  • the information for accessing the first wireless communication network may comprise a limitation on a capability of an entity connected to the first wireless communication network.
  • the information for accessing the second wireless communication network may comprise a permission for accessing the second wireless communication network.
  • the information for accessing the second wireless communication network may comprise an authorization for accessing the second wireless communication network.
  • the information for accessing the second wireless communication network may comprise a limitation on a capability of an entity connected to the second wireless communication network.
  • the at least one processor may be further configured to cause the central network entity to: receive, from a second network entity, third information corresponding to the first distributed network entity or fourth information corresponding to the second distributed network entity.
  • the second network entity may comprise a repository.
  • the second network entity may comprise a common API framework core function.
  • the at least one processor may be further configured to cause the central network entity to: determine a quantity of distributed network entities to support the model operation according to the discovery criterion.
  • the quantity of distributed network entities may be based on a performance requirement of the model operation.
  • the performance requirement of the model operation may be based on a quantity of nodes supporting the model operation.
  • the at least one processor may be further configured to cause the central network entity to: receive, from the first distributed network entity, an indication of a request for an availability or capability of the second node to support the model operation.
  • the indication of the request for the availability of the second node to support the model operation may be an AIMLE client selection request.
  • the at least one processor may be further configured to cause the central network entity to: transmit, to the first distributed network entity, an indication of the availability or the capability of the second node to support the model operation.
  • the indication of the availability of the second node to support the model operation may be an AIMLE client selection response.
  • at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
  • the at least one processor may be further configured to cause the central network entity to: identify the first distributed network entity or the second distributed network entity based on the discovery criterion.
  • the at least one processor being configured to cause the central network entity to identify the first distributed network entity or the second distributed network entity may comprise the at least one processor being further configured to cause the central network entity to: obtain an identity of the first distributed network entity or an identity of the second distributed network entity from a machine learning repository.
  • a method performed or performable by a central network entity comprising: receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
  • Such a method performed or performable by the central network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
  • the discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
  • the method may further comprise receiving, from a second network entity, third information corresponding to the first distributed network entity or fourth information corresponding to the second distributed network entity.
  • the method may further comprise determining a quantity of distributed network entities to support the model operation according to the discovery criterion.
  • the quantity of distributed network entities may be based on a performance requirement of the model operation.
  • the performance requirement of the model operation may be based on a quantity of nodes supporting the model operation.
  • the method may further comprise receiving, from the first distributed network entity, an indication of a request for an availability or capability of the second node to support the model operation.
  • the method may further comprise transmitting, to the first distributed network entity, an indication of the availability or the capability of the second node to support the model operation.
  • at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
  • a first network entity for wireless communication comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the first network entity to: transmit, to a central network entity, a request to support a model operation according to a discovery criterion; and receive, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • Such a first network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
  • the discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
  • at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
  • a method performed or performable by a first network entity comprising: transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
  • a central network entity for wireless communication is described.
  • the central network entity may be configured to, capable of, or operable to perform one or more operations as described herein.
  • a method performed or performable by the central network entity is described herein.
  • a processor for wireless communication is described.
  • the processor may be configured to, capable of, or operable to perform one or more operations as described herein.
  • a first network entity for wireless communication is described.
  • the first network entity may be configured to, capable of, or operable to perform one or more operations as described herein.
  • a method performed or performable by the first network entity is described herein.
  • a processor for wireless communication is described.
  • the processor may be configured to, capable of, or operable to perform one or more operations as described herein.
  • Examples described herein may relate to how to enable the discovery and selection of AIMLE clients in a target area covered by one or more edge service areas operated by different PLMNs by a central/cloud-deployed AIMLE server, where there is no direct access to the VAL UEs per PLMN.
  • Examples described herein may relate to providing a method for discovering and selecting AI/ML participants (AIMLE clients, VAL clients) in an AIMLE-assisted operation (e.g. ML model training, FL, TL) in a given service area assuming a hierarchical AIMLE deployment model, where the AIMLE servers are connected to different PLMNs.
  • Previous solutions do not consider multi-operator aspects, or the related interaction among AI/ML servers in cloud, edge, and core networks.
  • an application enablement function.
  • There is also provided a method at a first entity for controlling the operation of an ML task, wherein the ML task is communicated via a plurality of wireless communication networks, the method comprising: receiving a requirement for discovering a plurality of nodes for operating the ML task, wherein the plurality of nodes is connected to a plurality of wireless communication networks; determining a set of application entities, wherein each entity is associated with a wireless communication network from the plurality of networks and each application entity is configured to support part of the ML task for the corresponding wireless communication network; sending a request to the corresponding application entity to discover the nodes connected to each of the associated wireless communication networks; and receiving information from the corresponding application entities, wherein the information comprises the identity of at least one node associated with at least one of the plurality of wireless communication networks.
  • the method may further comprise sending information on the discovered nodes to a second application.
  • the set of application entities may be application enablement entities, SEAL entities, analytics functions, or application functions.
  • the requirement may comprise information on the allowed mobile network operator networks, permissions and/or authorizations when accessing each of the wireless communication networks, information on the serving PLMN, information on a PLMN preferred by the first application, a service area corresponding to the coverage of the serving PLMN, or a combination thereof.
  • the method may further comprise discovering the set of application entities via fetching information from a repository, a common API framework core function or a combination thereof.
  • the method may further comprise selecting the minimum required application entities from the determined set of application entities based on the requirement, in particular the serving and/or preferred network associated with the second application. Selecting the minimum required application entities may be based on the performance requirement of the ML task, wherein the performance requirement is based on the minimum number of nodes participating in the ML task.
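The discovery flow summarized in the list above — a central AIMLE server fanning a client discovery request out to distributed AIMLE servers in different PLMNs and merging their per-network results into a single response — can be sketched as follows. This is purely illustrative: every class, field, and method name here is a hypothetical placeholder, not an identifier from any 3GPP specification.

```python
from dataclasses import dataclass

@dataclass
class DiscoveryCriterion:
    """AIMLE client discovery criterion (illustrative fields only)."""
    service_area_id: str
    allowed_mnos: list[str]   # e.g. MNO names or PLMN IDs

@dataclass
class NodeInfo:
    """Information corresponding to one node (e.g. a UE hosting an AIMLE client)."""
    node_id: str
    network: str              # the PLMN/NPN the node is connected to

class DistributedAimleServer:
    """Hypothetical distributed AIMLE server deployed in a single PLMN."""
    def __init__(self, network: str, clients: list[NodeInfo]):
        self.network = network
        self.clients = clients

    def discover(self, criterion: DiscoveryCriterion) -> list[NodeInfo]:
        # Report this network's clients only if its operator is allowed
        # by the discovery criterion.
        if self.network not in criterion.allowed_mnos:
            return []
        return list(self.clients)

class CentralAimleServer:
    """Hypothetical central AIMLE server that aggregates discovery results."""
    def __init__(self, distributed: list[DistributedAimleServer]):
        self.distributed = distributed

    def handle_discovery_request(self, criterion: DiscoveryCriterion) -> list[NodeInfo]:
        # Fan the request out to each distributed server and merge the
        # per-network results into one discovery response.
        response: list[NodeInfo] = []
        for server in self.distributed:
            response.extend(server.discover(criterion))
        return response
```

For example, a central server fronting servers in PLMN-A and PLMN-B would return nodes from both networks when both operators are allowed by the criterion, and only PLMN-A's nodes when the criterion names PLMN-A alone.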

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Various aspects of the present disclosure relate to a central network entity for wireless communication. The central network entity may be configured to, capable of, or operable to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.

Description

SUPPORTING A MODEL OPERATION IN A WIRELESS COMMUNICATION SYSTEM
TECHNICAL FIELD
[0001] The present disclosure relates generally to wireless communication (or wireless communication network), including supporting a model operation in a wireless communication system.
BACKGROUND
[0002] A wireless communications system may include one or multiple network communication devices, which may be otherwise known as network equipment (NE), supporting wireless communications for one or multiple user communication devices, which may be otherwise known as user equipment (UE), or other suitable terminology. The wireless communications system may support wireless communications with one or multiple user communication devices by utilizing resources of the wireless communication system (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers, or the like)). Additionally, the wireless communications system may support wireless communications across various radio access technologies including third generation (3G) radio access technology, fourth generation (4G) radio access technology, fifth generation (5G) radio access technology, among other suitable radio access technologies beyond 5G (e.g., sixth generation (6G)).
SUMMARY
[0003] An article "a" before an element is unrestricted and understood to refer to "at least one" of those elements or "one or more" of those elements. The terms "a," "at least one," "one or more," and "at least one of one or more" may be interchangeable. As used herein, including in the claims, "or" as used in a list of items (e.g., a list of items prefaced by a phrase such as "at least one of" or "one or more of" or "one or both of") indicates an inclusive list such that, for example, a list of at least one of A, B, or C means A or B or C or AB or AC or BC or ABC (i.e., A and B and C). Also, as used herein, the phrase "based on" shall not be construed as a reference to a closed set of conditions. For example, an example step that is described as "based on condition A" may be based on both a condition A and a condition B without departing from the scope of the present disclosure. In other words, as used herein, the phrase "based on" shall be construed in the same manner as the phrase "based at least in part on." Further, as used herein, including in the claims, a "set" may include one or more elements.
[0004] The following abbreviations are herewith defined, at least some of which are referred to within the following description: ADAES - Application Data Analytics Enablement Server; AIML - Artificial Intelligence/Machine Learning; AIMLE - AI/ML Enablement; API - Application Programming Interface; ASP - Application Service Provider; DN - Data Network; EAS - Edge Application Server; EDN - Edge Data Network; EES - Edge Enabler Server; FL - Federated Learning; HFL - Horizontal Federated Learning; ML - Machine Learning; MNO - Mobile Network Operator; NEF - Network Exposure Function; PLMN - Public Land Mobile Network; RAT - Radio Access Technology; SEAL - Service Enabler Architecture Layer; SEALDD - SEAL Data Delivery; TL - Transfer Learning; VAL - Vertical Application Layer; VFL - Vertical Federated Learning.
[0005] A central network entity for wireless communication is described. The central network entity may be configured to, capable of, or operable to perform one or more operations as described herein. For example, the central network entity may include at least one memory; and at least one processor coupled with the at least one memory and configured to cause the central network entity to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.

[0006] A method performed by a central network entity is described. The method may comprise: receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
[0007] A first network entity for wireless communication is described. The first network entity may be configured to, capable of, or operable to perform one or more operations as described herein. For example, the first network entity may include at least one memory; and at least one processor coupled with the at least one memory and configured to cause the first network entity to: transmit, to a central network entity, a request to support a model operation according to a discovery criterion; and receive, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
[0008] A method performed by a first network entity is described. The method may comprise: transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.

BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Figure 1 illustrates an example of a wireless communications system in accordance with aspects of the present disclosure.
[0010] Figure 2 illustrates a diagram of an on-network AIMLE functional model in accordance with aspects of the present disclosure.
[0011] Figure 3 illustrates a diagram showing ML model lifecycle enablement in accordance with aspects of the present disclosure.
[0012] Figure 4 illustrates a diagram showing an example of a hierarchical deployment of AIMLE in accordance with aspects of the present disclosure.
[0013] Figure 5 illustrates a diagram showing relationships for VAL services in accordance with aspects of the present disclosure.
[0014] Figure 6 illustrates a diagram of a multi-operator scenario in accordance with aspects of the present disclosure.
[0015] Figure 7 illustrates an example of a process flow for AIMLE client discovery in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
[0016] Figure 8 illustrates an example of a process flow for AIMLE client selection in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
[0017] Figure 9 illustrates an example of a UE 900 in accordance with aspects of the present disclosure.
[0018] Figure 10 illustrates an example of a processor 1000 in accordance with aspects of the present disclosure.
[0019] Figure 11 illustrates an example of a NE 1100 in accordance with aspects of the present disclosure.
[0020] Figure 12 illustrates a flowchart of a method 1200 performed by a NE in accordance with aspects of the present disclosure.

[0021] Figure 13 illustrates a flowchart of a method 1300 performed by a NE in accordance with aspects of the present disclosure.
DETAILED DESCRIPTION
[0022] A wireless communication system (or wireless communication network), including one or more UE and NE, may support an AIMLE framework for integrating AI/ML functionalities. AIMLE tends to facilitate model operations such as model training, deployment, and inference across various AI/ML participants, such as AIMLE clients or VAL clients, in the wireless communication system. In some examples described herein, the AIMLE servers are located at different EDNs/DNs (or distributed network entities) and deployed by the same provider. Such a scenario may be referred to as a hierarchical deployment of AIMLE (or hierarchical AIMLE).
[0023] Hierarchical AIMLE may be performed using multiple AI/ML participants covered by an EDN in a wireless communication network. However, there may be scenarios in which the number (or quantity) of available (or fully capable) AI/ML participants covered by the EDN in the wireless communication network is insufficient to perform the model operation. Examples described herein may relate to the integration of AI/ML participants across multiple wireless communication systems when the AIMLE servers are located at different EDNs. Integrating AI/ML participants across multiple wireless communication systems tends to improve hierarchical AIMLE in one wireless communication system by facilitating access to additional AI/ML participants in another wireless communication system.
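One way to picture the selection aspect described above — choosing how many distributed network entities are needed so that enough AI/ML participants are available to meet the model operation's performance requirement — is the following greedy counting sketch. The function name and the largest-contributor-first strategy are illustrative assumptions only, not a normative selection algorithm from this disclosure or any specification.

```python
def min_required_servers(per_server_nodes: list[int], required_nodes: int) -> int:
    """Return how many distributed AIMLE servers are needed so that the
    eligible AIMLE clients they contribute meet the model operation's
    minimum-participant requirement.

    per_server_nodes holds the eligible client count reported by each
    distributed server; the largest contributors are counted first
    (a greedy heuristic used here only for illustration).
    """
    total = 0
    for i, count in enumerate(sorted(per_server_nodes, reverse=True), start=1):
        total += count
        if total >= required_nodes:
            return i
    raise ValueError("not enough eligible nodes across all networks")
```

For instance, with servers reporting 5, 3, and 2 eligible clients and a requirement of 7 participating nodes, two servers suffice; if the requirement exceeded the total across all networks, the sketch would signal that the operation cannot be supported in the target area.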
[0024] Aspects of the present disclosure are described in the context of a wireless communications system.
[0025] Figure 1 illustrates an example of a wireless communications system 100 in accordance with aspects of the present disclosure. The wireless communications system 100 may include one or more NE 102, one or more UE 104, and a core network (CN) 106. The wireless communications system 100 may support various radio access technologies. In some implementations, the wireless communications system 100 may be a 4G network, such as an LTE network or an LTE-Advanced (LTE-A) network. In some other implementations, the wireless communications system 100 may be a NR network, such as a 5G network, a 5G-Advanced (5G-A) network, or a 5G ultrawideband (5G-UWB) network. In other implementations, the wireless communications system 100 may be a combination of a 4G network and a 5G network, or other suitable radio access technology including Institute of Electrical and Electronics Engineers (IEEE) 802.11 (Wi-Fi), IEEE 802.16 (WiMAX), or IEEE 802.20. The wireless communications system 100 may support radio access technologies beyond 5G, for example, 6G. Additionally, the wireless communications system 100 may support technologies, such as time division multiple access (TDMA), frequency division multiple access (FDMA), or code division multiple access (CDMA), etc.
[0026] The one or more NE 102 may be dispersed throughout a geographic region to form the wireless communications system 100. One or more of the NE 102 described herein may be or include or may be referred to as a network node, a base station, a network element, a network function, a network entity, a radio access network (RAN), a NodeB, an eNodeB (eNB), a next-generation NodeB (gNB), or other suitable terminology. An NE 102 and a UE 104 may communicate via a communication link, which may be a wireless or wired connection. For example, an NE 102 and a UE 104 may perform wireless communication (e.g., receive signalling, transmit signalling) over a Uu interface.
[0027] An NE 102 may provide a geographic coverage area for which the NE 102 may support services for one or more UEs 104 within the geographic coverage area. For example, an NE 102 and a UE 104 may support wireless communication of signals related to services (e.g., voice, video, packet data, messaging, broadcast, etc.) according to one or multiple radio access technologies. In some implementations, an NE 102 may be moveable, for example, a satellite associated with a non-terrestrial network (NTN). In some implementations, different geographic coverage areas associated with the same or different radio access technologies may overlap, but the different geographic coverage areas may be associated with different NE 102.
[0028] The one or more UE 104 may be dispersed throughout a geographic region of the wireless communications system 100. A UE 104 may include or may be referred to as a remote unit, a mobile device, a wireless device, a remote device, a subscriber device, a transmitter device, a receiver device, or some other suitable terminology. In some implementations, the UE 104 may be referred to as a unit, a station, a terminal, or a client, among other examples. Additionally, or alternatively, the UE 104 may be referred to as an Internet-of-Things (loT) device, an Internet-of-Everything (loE) device, or machine-type communication (MTC) device, among other examples.
[0029] A UE 104 may be able to support wireless communication directly with other UEs 104 over a communication link. For example, a UE 104 may support wireless communication directly with another UE 104 over a device-to-device (D2D) communication link. In some implementations, such as vehicle-to-vehicle (V2V) deployments, vehicle-to-everything (V2X) deployments, or cellular-V2X deployments, the communication link may be referred to as a sidelink. For example, a UE 104 may support wireless communication directly with another UE 104 over a PC5 interface.
[0030] An NE 102 may support communications with the CN 106, or with another NE 102, or both. For example, an NE 102 may interface with other NE 102 or the CN 106 through one or more backhaul links (e.g., S1, N2, N3, or another network interface). In some implementations, the NE 102 may communicate with each other directly. In some other implementations, the NE 102 may communicate with each other indirectly (e.g., via the CN 106). In some implementations, one or more NE 102 may include subcomponents, such as an access network entity, which may be an example of an access node controller (ANC). An ANC may communicate with the one or more UEs 104 through one or more other access network transmission entities, which may be referred to as radio heads, smart radio heads, or transmission-reception points (TRPs).
[0031] The CN 106 may support user authentication, access authorization, tracking, connectivity, and other access, routing, or mobility functions. The CN 106 may be an evolved packet core (EPC), or a 5G core (5GC), which may include a control plane entity that manages access and mobility (e.g., a mobility management entity (MME), an access and mobility management functions (AMF)) and a user plane entity that routes packets or interconnects to external networks (e.g., a serving gateway (S-GW), a Packet Data Network (PDN) gateway (P-GW), or a user plane function (UPF)). In some implementations, the control plane entity may manage non-access stratum (NAS) functions, such as mobility, authentication, and bearer management (e.g., data bearers, signal bearers, etc.) for the one or more UEs 104 served by the one or more NE 102 associated with the CN 106.
[0032] The CN 106 may communicate with a packet data network over one or more backhaul links (e.g., via an S1, N2, N3, or another network interface). The packet data network may include an application server. In some implementations, one or more UEs 104 may communicate with the application server. A UE 104 may establish a session (e.g., a protocol data unit (PDU) session, or the like) with the CN 106 via an NE 102. The CN 106 may route traffic (e.g., control information, data, and the like) between the UE 104 and the application server using the established session (e.g., the established PDU session). The PDU session may be an example of a logical connection between the UE 104 and the CN 106 (e.g., one or more network functions of the CN 106).
[0033] In the wireless communications system 100, the NEs 102 and the UEs 104 may use resources of the wireless communications system 100 (e.g., time resources (e.g., symbols, slots, subframes, frames, or the like) or frequency resources (e.g., subcarriers, carriers)) to perform various operations (e.g., wireless communications). In some implementations, the NEs 102 and the UEs 104 may support different resource structures. For example, the NEs 102 and the UEs 104 may support different frame structures. In some implementations, such as in 4G, the NEs 102 and the UEs 104 may support a single frame structure. In some other implementations, such as in 5G, among other suitable radio access technologies, the NEs 102 and the UEs 104 may support various frame structures (i.e., multiple frame structures). The NEs 102 and the UEs 104 may support various frame structures based on one or more numerologies.
[0034] One or more numerologies may be supported in the wireless communications system 100, and a numerology may include a subcarrier spacing and a cyclic prefix. A first numerology (e.g., μ=0) may be associated with a first subcarrier spacing (e.g., 15 kHz) and a normal cyclic prefix. In some implementations, the first numerology (e.g., μ=0) associated with the first subcarrier spacing (e.g., 15 kHz) may utilize one slot per subframe. A second numerology (e.g., μ=1) may be associated with a second subcarrier spacing (e.g., 30 kHz) and a normal cyclic prefix. A third numerology (e.g., μ=2) may be associated with a third subcarrier spacing (e.g., 60 kHz) and a normal cyclic prefix or an extended cyclic prefix. A fourth numerology (e.g., μ=3) may be associated with a fourth subcarrier spacing (e.g., 120 kHz) and a normal cyclic prefix. A fifth numerology (e.g., μ=4) may be associated with a fifth subcarrier spacing (e.g., 240 kHz) and a normal cyclic prefix.
[0035] A time interval of a resource (e.g., a communication resource) may be organized according to frames (also referred to as radio frames). Each frame may have a duration, for example, a 10 millisecond (ms) duration. In some implementations, each frame may include multiple subframes. For example, each frame may include 10 subframes, and each subframe may have a duration, for example, a 1 ms duration. In some implementations, each frame may have the same duration. In some implementations, each subframe of a frame may have the same duration.
[0036] Additionally or alternatively, a time interval of a resource (e.g., a communication resource) may be organized according to slots. For example, a subframe may include a number (e.g., quantity) of slots. The number of slots in each subframe may also depend on the one or more numerologies supported in the wireless communications system 100. For instance, the first, second, third, fourth, and fifth numerologies (i.e., μ=0, μ=1, μ=2, μ=3, μ=4) associated with respective subcarrier spacings of 15 kHz, 30 kHz, 60 kHz, 120 kHz, and 240 kHz may utilize a single slot per subframe, two slots per subframe, four slots per subframe, eight slots per subframe, and 16 slots per subframe, respectively. Each slot may include a number (e.g., quantity) of symbols (e.g., OFDM symbols). In some implementations, the number (e.g., quantity) of slots for a subframe may depend on a numerology. For a normal cyclic prefix, a slot may include 14 symbols. For an extended cyclic prefix (e.g., applicable for 60 kHz subcarrier spacing), a slot may include 12 symbols. The relationship between the number of symbols per slot, the number of slots per subframe, and the number of slots per frame for a normal cyclic prefix and an extended cyclic prefix may depend on a numerology. It should be understood that reference to a first numerology (e.g., μ=0) associated with a first subcarrier spacing (e.g., 15 kHz) may be used interchangeably between subframes and slots.
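The per-numerology scaling described above follows a power-of-two pattern, which may be sketched as follows; the function name and return structure are illustrative assumptions for readability, not part of any specification.

```python
def numerology_params(mu: int, extended_cp: bool = False) -> dict:
    """Return illustrative frame parameters for numerology mu (0..4)."""
    scs_khz = 15 * (2 ** mu)              # 15, 30, 60, 120, 240 kHz
    slots_per_subframe = 2 ** mu          # 1, 2, 4, 8, 16 slots
    if extended_cp and scs_khz != 60:
        raise ValueError("extended cyclic prefix applies to 60 kHz SCS only")
    symbols_per_slot = 12 if extended_cp else 14
    return {
        "scs_khz": scs_khz,
        "slots_per_subframe": slots_per_subframe,
        "slots_per_frame": 10 * slots_per_subframe,  # 10 subframes per frame
        "symbols_per_slot": symbols_per_slot,
    }
```

For example, numerology μ=2 with an extended cyclic prefix yields a 60 kHz subcarrier spacing, four slots per subframe, and 12 symbols per slot, consistent with the relationships above.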
[0037] In the wireless communications system 100, an electromagnetic (EM) spectrum may be split, based on frequency or wavelength, into various classes, frequency bands, frequency channels, etc. By way of example, the wireless communications system 100 may support one or multiple operating frequency bands, such as frequency range designations FR1 (410 MHz - 7.125 GHz), FR2 (24.25 GHz - 52.6 GHz), FR3 (7.125 GHz - 24.25 GHz), FR4 (52.6 GHz - 114.25 GHz), FR4a or FR4-1 (52.6 GHz - 71 GHz), and FR5 (114.25 GHz - 300 GHz). In some implementations, the NEs 102 and the UEs 104 may perform wireless communications over one or more of the operating frequency bands. In some implementations, FR1 may be used by the NEs 102 and the UEs 104, among other equipment or devices for cellular communications traffic (e.g., control information, data). In some implementations, FR2 may be used by the NEs 102 and the UEs 104, among other equipment or devices for short-range, high data rate capabilities.
[0038] FR1 may be associated with one or multiple numerologies (e.g., at least three numerologies). For example, FR1 may be associated with a first numerology (e.g., μ=0), which includes 15 kHz subcarrier spacing; a second numerology (e.g., μ=1), which includes 30 kHz subcarrier spacing; and a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing. FR2 may be associated with one or multiple numerologies (e.g., at least 2 numerologies). For example, FR2 may be associated with a third numerology (e.g., μ=2), which includes 60 kHz subcarrier spacing; and a fourth numerology (e.g., μ=3), which includes 120 kHz subcarrier spacing.
[0039] Considering vertical-specific applications and edge applications as the major consumers of 3GPP-provided data analytics and AI/ML support services, the AIMLE service plays a role in the exposure of AI/ML services from different 3GPP domains to the vertical/ASP in a unified manner on top of the 3GPP core network and OAM; and in defining, at a SEAL layer, value-add support services for assisting AI/ML services provided by the VAL layer, while being complementary to AI/ML support solutions provided in other 3GPP domains.
[0040] Figure 2 illustrates a diagram 200 of an on-network AIMLE functional model in accordance with aspects of the present disclosure.
[0041] The diagram 200 comprises a UE 205, a 3GPP Network System 240, a VAL server(s) 216, ML Repository 232 and an AIMLE server 236. The UE 205 comprises a VAL 210 and a SEAL 220. The VAL 210 comprises a VAL Client(s) 212. The SEAL 220 comprises an AIMLE client 232. The VAL server(s) 216 connects to the VAL Client(s) 212 via a VAL-UU 214. The AIMLE client 232 connects to the AIMLE server 236 via an AIML-UU 234. The 3GPP Network System 240 connects to the AIMLE server 236 via Network interfaces 235. The AIMLE server 236 connects to the VAL server(s) 216 via an AIML-S 231. The AIMLE server 236 connects to the ML Repository 232 via an AIML-R 233. The AIMLE server 236 comprises an AIML-E 238.
[0042] In the VAL 210, the VAL client 212 communicates with the VAL server 216 over VAL-UU 214 reference point. VAL-UU 214 supports both unicast and multicast delivery modes. The AIML enablement functional entities on the UE 205 and the server are grouped into AIMLE client(s) AIML-C 230 and AIML enablement server(s) 231, respectively.
[0043] AIMLE server 231 may be a newly defined SEAL server 220 which includes a common set of services for comprehensive enablement of AIML functionality. AIMLE server 231 may define one or more of the following groups of capabilities:
• Support for application-layer ML model related aspects, including model retrieval, model training, model monitoring, model selection, model update and model storage / discovery.
• Assistance in AI/ML task transfer and split AI/ML operations.
• Support HFL/VFL operations, including FL member registration, FL grouping and FL-related events notification, VFL feature alignment, HFL training.
• Support for AIMLE client registration, discovery, participation and selection.
[0044] The AIMLE client functional entity acts as the application client supporting AIMLE services. The ML repository is an entity that may serve as:
• a registry for ML/FL members (application layer entities participating in an AI/ML operation) and
• as a repository for application layer ML model related information.
[0045] Figure 3 illustrates a ML model lifecycle enablement diagram 300 in accordance with aspects of the present disclosure. [0046] The ML model lifecycle enablement diagram 300 comprises a VAL server(s) 316 for ML model operational workflow and an AIMLE 330 for ML model lifecycle enablement. The VAL Server(s) 316 comprises data management, model training, model evaluation, model deployment and model inference. The AIMLE 330 comprises ML model related support which comprises model retrieval, model discovery and model storage. The AIMLE 330 further comprises ML operation related support which comprises VFL/HFL enablement, TL enablement, split AI/ML operation, Data management support and FL member support e.g., grouping, register and events. The AIMLE 330 further comprises AIMLE client support which comprises AIMLE register, discovery, participate and monitoring.
[0047] One role of the AIMLE 330 is ML model lifecycle enablement, which provides assistance for use cases where an ASP/VAL layer aims to find and use other application entities to perform some ML operations (e.g., ML model inference), with an AIMLE server acting as a mediator to accomplish that.
[0048] In some examples, a grouping of AIMLE capabilities with respect to enablement is illustrated in Figure 3 and described in Annex C.4 of TS 23.482 V19.0.0. In particular, AIMLE 330 may undertake:
• ML model related support capabilities such as model retrieval, discovery and storage (as covered in procedures in clauses 8.2 and 8.11 of TS 23.482)
• ML operation related support capabilities such as VFL/ HFL and TL enablement, Split AI/ML Operation support, Data management assistance, AI/ML task transfer, FL assistance in member grouping, registration and event notification (as covered in procedures in clauses 8.4, 8.6, 8.12, 8.14, 8.15-8.18 of TS 23.482 V19.0.0).
• AIMLE client related support capabilities, including AIMLE client registration, discovery, participation, monitoring, selection (as covered in procedures in clauses 8.7-8.10, 8.13 of TS 23.482 V19.0.0).
[0049] Figure 4 illustrates a diagram 400 showing an example of a hierarchical deployment of AIMLE in accordance with aspects of the present disclosure. [0050] Diagram 400 comprises an EDN A1 460, an EDN A2 462, a Centralised DN (DNN-B) 464 and a PLMN 440. EDN A1 460 comprises a first EAS 454, a first EES 450 and AIMLE server #1.1 436. EDN A2 462 comprises a second EAS 456, a second EES 452 and AIMLE server #1.2 437. The Centralized DN 464 comprises VAL server 416 and AIMLE server #1 438 (e.g., the central AIMLE server 438). PLMN 440 comprises AIMLE 1.1 - service area 466 and AIMLE 1.2 - service area 468.
[0051] One deployment is discussed in TS 23.482 V19.0.0, where multiple AIMLE servers (e.g., AIMLE server #1.1 436 and AIMLE server #1.2 437) may be located at different EDNs/DNs (e.g., EDN A1 460 and EDN A2 462) and can be deployed by the same provider. Such hierarchical deployments allow local-global ML operations (e.g., FL across domains).
[0052] The ML support services that the edge-deployed AIMLE servers provide correspond to the AIMLE service areas (e.g., AIMLE 1.1 - service area 466 and AIMLE 1.2 - service area 468), which are equivalent to the EDN service areas. The central AIMLE server 438 covers the entire PLMN 440 area and is used to coordinate the ML-related operations (e.g., FL server / aggregator) with the distributed AIMLE servers (e.g., AIMLE server #1.1 436 and AIMLE server #1.2 437).
[0053] Figure 5 illustrates a diagram 500 showing relationships of VAL services in accordance with aspects of the present disclosure.
[0054] The diagram 500 comprises a VAL user 510, a VAL service provider 512, a SEAL provider 520, a Home PLMN operator 540 and a Visited PLMN operator 542.
[0055] There may be different business relationships for SEAL services, as provided in TS 23.434 V19.4.2 clause 5, such as the AIMLE functionality, that may be used to support a single VAL user 510.
[0056] The VAL user 510 belongs to a VAL service provider 512 based on a VAL service agreement between the VAL user 510 and the VAL service provider 512. The VAL service provider 512 may have VAL service agreements with several VAL users 510. The VAL user 510 may have VAL service agreements with several VAL service providers 512. [0057] The VAL service provider 512 and the home PLMN operator 540 may be part of the same organization, in which case the business relationship between the two is internal to a single organization.
[0058] The VAL service provider 512 may have SEAL provider arrangements with multiple SEAL providers 520 and the SEAL provider 520 may have PLMN operator service arrangements with multiple home PLMN operators 540. The SEAL provider 520 and the VAL service provider 512 or the home PLMN operator 540 may be part of the same organization, in which case the business relationship between the two is internal to a single organization.
[0059] The home PLMN operator 540 may have PLMN operator service arrangements with multiple VAL service providers 512 and the VAL service provider 512 may have PLMN operator service arrangements with multiple home PLMN operators 540. As part of the PLMN operator service arrangement between the VAL service provider 512 and the home PLMN operator 540, PLMN subscription arrangements may be provided which allow the VAL UEs to register with the home PLMN operator 540 network.
[0060] The home PLMN operator 540 may have PLMN roaming agreements with multiple visited PLMN operators 542 and the visited PLMN operator 542 may have PLMN roaming agreements with multiple home PLMN operators 540.
[0061] Figure 6 illustrates a diagram 600 showing a multi-operator scenario in accordance with aspects of the present disclosure.
[0062] The diagram 600 comprises a first UE 605, a second UE 607, a first AIMLE client 632 connected to an AIMLE server 636, a second AIMLE client 634 connected to the AIMLE server 636, a first 3GPP Network 640 connected to the AIMLE server 636, a second 3GPP network 642 connected to the AIMLE server 636 and VAL servers 616 connected to the AIMLE server 636. The first 3GPP Network 640 comprises an AIMLE Server 1.1 635. The second 3GPP Network 642 comprises an AIMLE Server 1.2 637.
AIMLE Server 1.1 635 and AIMLE Server 1.2 637 connect to the AIMLE Server 636 (e.g., Central AIMLE Server 636). [0063] Some examples described herein relate to an application service (e.g., VAL service, ASP service, AI/ML session or service) that runs in a certain service area, which may be an edge or cloud service area or a geographical area, where UEs connected to different PLMNs are present. Such a service may be considered a multi-operator service.
[0064] The VAL UEs may have AIMLE clients installed and active and connect to different AIMLE servers provided by different MNOs or trusted 3rd parties of the MNOs. The application service may be an application enablement service (e.g., AIMLE client to server) where AIMLE-UU may be over multiple MNO networks or it may be a VAL service (e.g., VAL client to server) where VAL-UU may be over multiple PLMNs.
[0065] In diagram 600, a Central AIMLE server 636 connects to two PLMNs (including different RATs, e.g. 5G, 6G), whereas AIMLE server 1.1 635 and AIMLE server 1.2 637 connect to single PLMNs.
[0066] The first UE 605 is connected with PLMN1 and AIMLE server 1.1 635 and the second UE 607 is connected with PLMN2 and AIMLE server 1.2 637. Different VAL servers 616 may connect to the VAL client of the first UE 605, and a different VAL server to that of the second UE 607; however, both VAL servers 616 are connected to the same AIMLE server 636.
[0067] In some examples described herein, an AI operation runs for a V2X scenario with VAL UEs of a traffic safety or optimization service (AI-enhanced) belonging to different networks, where AIMLE server 1.1 635 is provided by a service provider / MNO in an edge area, whereas AIMLE server 1.2 637 is provided by a regional network operator. The AIMLE server 636 may be a cloud provider or 3rd party VMNO connecting to both MNO1 and MNO2.
[0068] Some examples described herein relate to a mechanism for discovering and selecting AI/ML participants (AIMLE clients, VAL clients) in an AIMLE-assisted operation (e.g., ML model training, FL, TL) in a given service area assuming a hierarchical AIMLE deployment model, where the AIMLE servers are connected to different PLMNs.
[0069] Some examples described herein relate to enhancements in existing AIMLE services and AIMLE-E interfaces for supporting cross-PLMN interactions, wherein the enhancements relate to receiving and processing information on the serving PLMN and an allowable list of PLMNs where the VAL server/client (as consumer) may receive AIMLE service.
[0070] Some examples described herein relate to determining and discovering AIMLE clients / UEs which are not within the serving PLMNs of the VAL. Hence, this step enables the central AIMLE server 636 to obtain supplementary AI/ML participants via edge-deployed AIMLE servers when there is a lack of availability or capability of the VAL UEs which are connected to the serving PLMN.
[0071] Some examples described herein relate to the enhancements to the interactions among AIMLE servers for supporting AIMLE client discovery in multi-PLMN scenarios. Some examples described herein relate to the enhancements to the interactions among AIMLE servers for supporting AIML participant selection in multi-PLMN scenarios.
[0072] The allowed PLMN may be a PLMN to which the UE can roam or connect via dual connectivity (registered but not connected to the target PLMN). In certain models, the allowed PLMNs correspond to the PLMNs with which the Server (AIMLE service provider, VAL provider, or ASP) has service agreements; however, there are no active connections to VAL UEs, or there is no network service consumed by the Server from the other PLMNs in the allowed list.
[0073] The preferred PLMN is part of the allowed list, but the VAL UE has a certain priority due to a better subscription from the service agreement between the ASP / SEAL provider and the preferred PLMN. The serving PLMN corresponds to the PLMN providing network services and is used for supporting the communication to the AIMLE clients / VAL clients (at the corresponding VAL UEs).
[0074] There may be two different options for supporting the multi-operator scenario in both the discovery and selection procedures. Based on the VAL server request, the AIMLE server initially checks both PLMNs for the best AIMLE clients and discovers/selects them. Based on the VAL server request and its preference for a specific PLMN for communicating the service, the AIMLE server checks the primary/preferred PLMN first and then gets supplementary clients from other PLMNs if needed. [0075] The AI/ML operation which requires cross-PLMN support may also be expanded to support VFL scenarios where the server may find clients from multiple PLMNs in order to train the model. The requirements may include a PLMN list and a required number of common samples to be supported by each FL client.
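The two options above can be sketched as follows; the server object, method names, and scoring field are hypothetical assumptions for illustration, not defined AIMLE interfaces.

```python
def discover_clients(aimle_server, request, required_count):
    """Illustrative sketch of the two multi-operator discovery options."""
    if request.get("preferred_plmn"):
        # Option 2: check the primary/preferred PLMN first, then get
        # supplementary clients from other allowed PLMNs if needed.
        clients = aimle_server.query_plmn(request["preferred_plmn"], request)
        for plmn in request.get("allowed_plmns", []):
            if len(clients) >= required_count:
                break
            if plmn != request["preferred_plmn"]:
                clients += aimle_server.query_plmn(plmn, request)
    else:
        # Option 1: check all PLMNs and pick the best AIMLE clients.
        clients = []
        for plmn in request.get("allowed_plmns", []):
            clients += aimle_server.query_plmn(plmn, request)
        clients.sort(key=lambda c: c["score"], reverse=True)
    return clients[:required_count]
```

The sketch highlights the key difference: option 1 ranks candidates across all PLMNs, while option 2 only crosses PLMN boundaries when the preferred PLMN cannot supply enough clients.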
[0076] Figure 7 illustrates an example of a process flow 700 for AIMLE client discovery in hierarchical AIMLE deployments in accordance with aspects of the present disclosure. The process flow 700 may implement or be implemented by aspects of the wireless communication system 100. For example, the process flow 700 may include VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737, which may be one or more examples of devices described herein with reference to Figure 1.
[0077] The process flow 700 may be referred to as a procedure, including one or more operations performed by one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737.
[0078] In the following description of the process flow 700, the operations or signalling performed between one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737 may be performed or signalled (e.g., transmitted, received) in a different order than the example order shown, or the operations or signalling performed by one or more of the VAL server 716, Central AIMLE Server 736, AIMLE Server 1.1 @PLMN1 735 and AIMLE Server 1.2 @PLMN2 737 may be performed or signalled (e.g., transmitted, received) in different orders or at different times. Some operations or signalling may also be omitted from the process flow 700. Additionally, although some operations or signalling may be shown to occur at different times, these operations or signalling may occur at the same time or in overlapping time periods.
[0079] Discovery of AIMLE clients is an important step in the AI/ML process for distributed, federated, split AI/ML, and transfer learning. Due to the nature of such learning, VAL servers 716 may discover suitable AIMLE clients to fulfil the requirements for the AI/ML application. The VAL server 716 may then use the discovered AIMLE clients to select a set of AIMLE clients to perform AI/ML operations. [0080] In the cross-PLMN scenario as in process flow 700, the following enhancements are provided (over the AIMLE-S or SEAL-X interface if the AIMLE consumer is another SEAL server, e.g., ADAES).
[0081] In some examples described herein, AIMLE clients that support AI/ML operations have registered with the corresponding AIMLE server (e.g., AIMLE server 1.1 735 or AIMLE server 1.2 737) and included their AIMLE client profiles and optionally a list of supported services. The central AIMLE server may access an ML repository to obtain AIMLE client profiles and supported services associated with AIMLE clients.
[0082] In step 771 of process flow 700, the VAL server 716 sends an AIMLE client discovery request to a central AIMLE server 736 to discover a list of AIMLE clients that are available to participate in AI/ML operations (e.g., are available and have the required data to train an ML model). The request may also include AIML client task capability requirements to discover clients who can perform AIML tasks such as AIML model training/offload/split with compute requirements and a task performance preference such as green task performance. This request may also include information on the serving and/or preferred PLMN for the communication of the VAL service, as well as the allowed MNO info for other MNO networks from which the AIMLE consumer / VAL server can discover AIMLE clients / VAL UEs. Table 1 describes information elements that may be included in the AIMLE client discovery request. Table 2 describes information elements that may be included in the AIMLE client discovery criteria. Table 1 AIMLE client discovery request
Table 2 AIMLE client discovery criteria
[0083] The central AIMLE server 736 performs authentication and authorization checks to determine if the requestor is able to discover AIMLE clients for the VAL UEs connected to the serving PLMN.
[0084] In step 772, the central AIMLE server 736 determines to discover, and discovers, the AIMLE server(s) corresponding to the target PLMNs as provided in the allowed MNO info and the serving PLMN.
[0085] In step 773, the central AIMLE server 736 sends an AIMLE client discovery request to the AIMLE Server 1.1 735 (of serving PLMN), where the request includes: the requestor VAL server ID and address, the service ID and profile/requirements, the capability requirements and discovery filter criteria (which can be a subset of parameters of Table 8.8.3.1-2 of TS 23.482 with the addition of the new parameters as in step 771).
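The request contents listed in step 773 can be sketched as a simple data structure; the field names below are illustrative assumptions for readability and do not correspond to the information element names in TS 23.482.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AimleClientDiscoveryRequest:
    """Illustrative sketch of an AIMLE client discovery request."""
    requestor_val_server_id: str
    requestor_val_server_address: str
    service_id: str
    service_requirements: dict = field(default_factory=dict)
    capability_requirements: dict = field(default_factory=dict)
    discovery_filter_criteria: dict = field(default_factory=dict)
    serving_plmn: Optional[str] = None    # PLMN serving the VAL service
    preferred_plmn: Optional[str] = None  # part of the allowed list
    allowed_mno_info: list = field(default_factory=list)  # other allowed MNOs
```

The serving, preferred, and allowed MNO fields capture the cross-PLMN additions described in step 771; the remaining fields mirror the baseline discovery parameters referenced from TS 23.482.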
[0086] In step 774, the central AIMLE server 736 receives an AIMLE client discovery response from the AIMLE Server 1.1 735 (of serving PLMN), where the response includes the discovered AIMLE client IDs and addresses connected to the AIMLE server 1.1 (of PLMN1) in the given service area. [0087] In step 775, the central AIMLE server 736 identifies that the number of AIMLE clients is not sufficient for the AI/ML operation / AIMLE service, or that more AIMLE clients are required based on the VAL requirements, and triggers discovery of AIMLE clients from AIMLE Server 1.2 (PLMN2).
[0088] In step 776, the central AIMLE server 736 repeats steps 773-774 for the AIMLE Server 1.2 737 to fetch the discovered AIMLE clients of AIMLE server 1.2 737.
[0089] The central AIMLE server 736 sends an AIMLE client discovery response which may include the cross-PLMN information with the PLMN ID/name from which the discovered VAL UEs were provided. If the required number of AIMLE clients is not included in the response message from the central AIMLE server 736, then the AIML service consumer may discover the remaining required AIMLE clients directly from the corresponding AIMLE server 1.1 735 or AIMLE server 1.2 737, by providing the alternative/target/destination PLMN, or based on the AIMLE server ID/address, indicating the required number of AIMLE clients from the destination PLMN. Table 3 describes the information elements that may be included in the AIMLE client discovery response.
Table 3 AIMLE client discovery response
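The central AIMLE server behaviour in steps 772-776 can be sketched as follows; the helper methods and field names are hypothetical assumptions, not an implementation of the procedure in TS 23.482.

```python
def central_discovery(central, request, required_count):
    """Illustrative sketch of central AIMLE server discovery (steps 772-776)."""
    # Step 772: resolve edge AIMLE servers for the serving and allowed PLMNs.
    plmns = [request["serving_plmn"]] + request["allowed_mno_info"]
    servers = central.resolve_servers(plmns)
    discovered = []
    for server in servers:
        # Steps 773-774 (repeated in step 776 for further servers): forward
        # the discovery request and collect clients tagged with their PLMN.
        for client in server.discover(request):
            discovered.append({**client, "plmn": server.plmn})
        # Step 775: stop once enough clients are found for the AI/ML operation.
        if len(discovered) >= required_count:
            break
    return discovered
```

The serving PLMN is queried first; additional PLMNs from the allowed MNO info are consulted only when the serving PLMN cannot supply enough AIMLE clients, mirroring the escalation in step 775.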
[0090] Figure 8 illustrates an example of a process flow 800 for AIMLE client selection in hierarchical AIMLE deployments in accordance with aspects of the present disclosure.
[0091] Figure 8 illustrates an example of a process flow 800 in accordance with aspects of the present disclosure. The process flow 800 may implement or be implemented by aspects of the wireless communication system 100. For example, the process flow 800 may include VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832, which may be one or more examples of devices described herein with reference to Figure 1.
[0092] The process flow 800 may be referred to as a procedure, including one or more operations performed by one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832. [0093] In the following description of the process flow 800, the operations or signalling performed between one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832 may be performed or signalled (e.g., transmitted, received) in a different order than the example order shown, or the operations or signalling performed by one or more of the VAL server 816, AIMLE Server 1.1 @PLMN1 835, AIMLE Server 1.2 @PLMN2 837, Central AIMLE Server 836 and ML repository 832 may be performed or signalled (e.g., transmitted, received) in different orders or at different times. Some operations or signalling may also be omitted from the process flow 800. Additionally, although some operations or signalling may be shown to occur at different times, these operations or signalling may occur at the same time or in overlapping time periods.
[0094] Process flow 800 focuses on the interaction among AIMLE servers (central and PLMN-specific). AIMLE clients that support AI/ML operations may have registered with the central AIMLE server 836 and included their AIMLE client profiles and optionally a list of supported services. The central AIMLE server 836 may access the ML repository 832 to obtain AIMLE client profiles and supported services associated with the AIMLE clients. The VAL server 816 (e.g., AIMLE consumer) may have discovered the AIMLE server 1.1 835 (and optionally its AIMLE clients) based on the discovery procedure e.g., process flow 700.
[0095] In step 871 of process flow 800, the VAL server 816 sends an AIMLE client selection request to AIMLE server #1.1 835 to select AIMLE clients available for participation in AI/ML operations (e.g., clients that are available and have the required data to train an ML model) based on the discovery procedure. Table 4 describes the information elements that may be included in the AIMLE client selection request.
Table 4 AIMLE client selection request
[0096] In step 872, the AIMLE server #1.1 835 performs authentication and authorization checks to determine if the requestor is able to select AIMLE clients. The AIMLE server #1.1 835, based on the request, evaluates whether the AIMLE clients are capable of acting as ML/FL participants for the AI/ML operation (AIMLE or VAL service or analytics service). In this step, interaction with the ML repository 832 may be used to identify the availability status of the AIMLE clients. If more AIMLE clients are needed, the AIMLE server 1.1 835 determines to request assistance from other AIMLE servers of other MNOs (e.g., AIMLE Server 1.2 837).
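The step-872 evaluation can be sketched as a filter over candidate clients followed by a check of whether assistance from AIMLE servers of other MNOs is needed. The candidate attributes and their names below are illustrative assumptions, not normative fields.

```python
# Hypothetical sketch of the step-872 evaluation: keep only candidates
# that are available and capable of the ML/FL participation, then decide
# whether more clients must be requested from other MNOs' AIMLE servers.

def evaluate_candidates(candidates, required_count):
    """Return (eligible clients, whether cross-MNO assistance is needed)."""
    eligible = [c for c in candidates
                if c.get("available") and c.get("can_train_ml")]
    need_assistance = len(eligible) < required_count
    return eligible, need_assistance

candidates = [
    {"id": "c1", "available": True,  "can_train_ml": True},
    {"id": "c2", "available": False, "can_train_ml": True},   # busy
    {"id": "c3", "available": True,  "can_train_ml": False},  # not capable
]
eligible, need_assistance = evaluate_candidates(candidates, required_count=2)
```

Here only one of three candidates qualifies, so the server would proceed to request assistance as in step 873.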
[0097] In step 873, the AIMLE server #1.1 835 requests the central AIMLE server 836 to discover and fetch information on other AIMLE servers covering the same service area for which the request applies. Such a request can be in the form of a trigger event, where the central AIMLE server 836 is expected to determine an action for discovering additional AIMLE clients from other AIMLE servers. Such a request also includes the allowed MNO info for the VAL server 816 (e.g., to identify allowed PLMN-specific AIMLE servers).
[0098] In step 874, since the ML repository 832 may keep track of the AIMLE clients, or more generally the ML/FL candidate members, which are matched per PLMN, the central AIMLE server 836 fetches the list of available FL/ML members (and/or VAL UEs) for the requested PLMN ID.
[0099] In step 875, alternatively or complementarily to step 874, the central AIMLE server 836 queries AIMLE server #1.2 837 for the availability of AIMLE clients for the AI/ML operation (AIMLE or VAL service or analytics service).
[0100] In step 876, based on the discovered AIMLE clients / ML or FL members from AIMLE server 1.2 837 (directly or via the ML repository 832), the central AIMLE server 836 selects which entities are going to act as ML/FL members for the AI/ML operation (e.g., FL, ML model inference, training). Such selection is based on the permissions/capabilities and authorizations/agreements of the VAL applications with the additional MNO(s). Also, such selection may be based on different factors such as a rating or weights for the given operation (AIMLE clients of a different MNO can have a lower rating given the permissions/limitations, etc.).
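The rating-based selection of step 876 can be sketched as ranking candidates by a score that down-weights clients of a different MNO, reflecting the permissions/limitations mentioned above. The scoring function, the weight value, and the candidate fields are all assumptions for illustration.

```python
# Illustrative sketch of the step-876 selection: rank candidate ML/FL
# members by a rating that reduces the score of clients belonging to a
# different MNO, then pick the top N. The weighting scheme is hypothetical.

def select_members(candidates, serving_plmn, needed, cross_plmn_weight=0.5):
    def rating(c):
        base = c["capability_score"]
        # Clients of another MNO get a reduced rating.
        return base if c["plmn"] == serving_plmn else base * cross_plmn_weight
    ranked = sorted(candidates, key=rating, reverse=True)
    return [c["id"] for c in ranked[:needed]]

candidates = [
    {"id": "c1", "plmn": "PLMN1", "capability_score": 0.8},
    {"id": "c2", "plmn": "PLMN2", "capability_score": 0.9},  # cross-PLMN
    {"id": "c3", "plmn": "PLMN1", "capability_score": 0.6},
]
selected = select_members(candidates, serving_plmn="PLMN1", needed=2)
```

With these illustrative numbers, the cross-PLMN client "c2" is ranked below both serving-PLMN clients despite its higher raw capability score.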
[0101] In step 877, the central AIMLE server 836 sends the selected AIMLE clients of AIMLE server #1.2 837 to the AIMLE server #1.1 835, where the message includes the list of AIMLE client IDs/addresses, the AIMLE server #1.2 ID and address, as well as cross-PLMN information and permissions/capabilities/limitations corresponding to the selected AIMLE clients of the second AIMLE server 837.
[0102] This message may also include the service area of AIMLE server #1.2 837 (e.g., topological or geographical or EDN area) in case there is partial overlap between the coverages of AIMLE server #1.1 835 and AIMLE server #1.2 837.
[0103] Such information may also be provided in earlier steps (e.g., in step 874 based on ML repository info, or in step 876 as a factor for selecting an AIMLE client over others from other AIMLE servers).
[0104] In step 878, the AIMLE server 1.1 835 sends an AIMLE client selection response. Table 5 describes the information elements that may be included in the AIMLE client selection response.
Table 5 AIMLE client selection response
[0105] Figure 9 illustrates an example of a UE 900 in accordance with aspects of the present disclosure. The UE 900 may include a processor 902, a memory 904, a controller 906, and a transceiver 908. The processor 902, the memory 904, the controller 906, or the transceiver 908, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
[0106] The processor 902, the memory 904, the controller 906, or the transceiver 908, or various combinations or components thereof may be implemented in hardware (e.g., circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
[0107] The processor 902 may include an intelligent hardware device (e.g., a general-purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 902 may be configured to operate the memory 904. In some other implementations, the memory 904 may be integrated into the processor 902. The processor 902 may be configured to execute computer-readable instructions stored in the memory 904 to cause the UE 900 to perform various functions of the present disclosure.
[0108] The memory 904 may include volatile or non-volatile memory. The memory 904 may store computer-readable, computer-executable code including instructions that, when executed by the processor 902, cause the UE 900 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as the memory 904 or another type of memory. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
[0109] In some implementations, the processor 902 and the memory 904 coupled with the processor 902 may be configured to cause the UE 900 to perform one or more of the functions described herein (e.g., executing, by the processor 902, instructions stored in the memory 904). For example, the processor 902 may support wireless communication at the UE 900 in accordance with examples as disclosed herein. The UE 900 may be configured to support the arrangements described herein.
[0110] The controller 906 may manage input and output signals for the UE 900. The controller 906 may also manage peripherals not integrated into the UE 900. In some implementations, the controller 906 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems. In some implementations, the controller 906 may be implemented as part of the processor 902.
[0111] In some implementations, the UE 900 may include at least one transceiver 908. In some other implementations, the UE 900 may have more than one transceiver 908. The transceiver 908 may represent a wireless transceiver. The transceiver 908 may include one or more receiver chains 910, one or more transmitter chains 912, or a combination thereof.
[0112] A receiver chain 910 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium. For example, the receiver chain 910 may include one or more antennas for receiving the signal over the air or wireless medium. The receiver chain 910 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal. The receiver chain 910 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal. The receiver chain 910 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
[0113] A transmitter chain 912 may be configured to generate and transmit signals (e.g., control information, data, packets). The transmitter chain 912 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium. The at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM). The transmitter chain 912 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium. The transmitter chain 912 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
[0114] Figure 10 illustrates an example of a processor 1000 in accordance with aspects of the present disclosure. The processor 1000 may be an example of a processor configured to perform various operations in accordance with examples as described herein. The processor 1000 may include a controller 1002 configured to perform various operations in accordance with examples as described herein. The processor 1000 may optionally include at least one memory 1004, which may be, for example, an L1/L2/L3 cache. Additionally, or alternatively, the processor 1000 may optionally include one or more arithmetic-logic units (ALUs) 1006. One or more of these components may be in electronic communication or otherwise coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces (e.g., buses).
[0115] The processor 1000 may be a processor chipset and include a protocol stack (e.g., a software stack) executed by the processor chipset to perform various operations (e.g., receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) in accordance with examples as described herein. The processor chipset may include one or more cores and one or more caches (e.g., memory local to or included in the processor chipset (e.g., the processor 1000)) or other memory (e.g., random access memory (RAM), read-only memory (ROM), dynamic RAM (DRAM), synchronous dynamic RAM (SDRAM), static RAM (SRAM), ferroelectric RAM (FeRAM), magnetic RAM (MRAM), resistive RAM (RRAM), flash memory, phase change memory (PCM), and others).
[0116] The controller 1002 may be configured to manage and coordinate various operations (e.g., signalling, receiving, obtaining, retrieving, transmitting, outputting, forwarding, storing, determining, identifying, accessing, writing, reading) of the processor 1000 to cause the processor 1000 to support various operations in accordance with examples as described herein. For example, the controller 1002 may operate as a control unit of the processor 1000, generating control signals that manage the operation of various components of the processor 1000. These control signals include enabling or disabling functional units, selecting data paths, initiating memory access, and coordinating timing of operations.
[0117] The controller 1002 may be configured to fetch (e.g., obtain, retrieve, receive) instructions from the memory 1004 and determine subsequent instruction(s) to be executed to cause the processor 1000 to support various operations in accordance with examples as described herein. The controller 1002 may be configured to track memory address of instructions associated with the memory 1004. The controller 1002 may be configured to decode instructions to determine the operation to be performed and the operands involved. For example, the controller 1002 may be configured to interpret the instruction and determine control signals to be output to other components of the processor 1000 to cause the processor 1000 to support various operations in accordance with examples as described herein. Additionally, or alternatively, the controller 1002 may be configured to manage flow of data within the processor 1000. The controller 1002 may be configured to control transfer of data between registers, arithmetic logic units (ALUs), and other functional units of the processor 1000.
[0118] The memory 1004 may include one or more caches (e.g., memory local to or included in the processor 1000 or other memory, such as RAM, ROM, DRAM, SDRAM, SRAM, MRAM, flash memory, etc.). In some implementations, the memory 1004 may reside within or on a processor chipset (e.g., local to the processor 1000). In some other implementations, the memory 1004 may reside external to the processor chipset (e.g., remote to the processor 1000).
[0119] The memory 1004 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1000, cause the processor 1000 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as system memory or another type of memory. The controller 1002 and/or the processor 1000 may be configured to execute computer-readable instructions stored in the memory 1004 to cause the processor 1000 to perform various functions. For example, the processor 1000 and/or the controller 1002 may be coupled with or to the memory 1004, and the processor 1000, the controller 1002, and the memory 1004 may be configured to perform various functions described herein. In some examples, the processor 1000 may include multiple processors and the memory 1004 may include multiple memories. One or more of the multiple processors may be coupled with one or more of the multiple memories, which may, individually or collectively, be configured to perform various functions described herein.
[0120] The one or more ALUs 1006 may be configured to support various operations in accordance with examples as described herein. In some implementations, the one or more ALUs 1006 may reside within or on a processor chipset (e.g., the processor 1000). In some other implementations, the one or more ALUs 1006 may reside external to the processor chipset (e.g., the processor 1000). One or more ALUs 1006 may perform one or more computations such as addition, subtraction, multiplication, and division on data. For example, one or more ALUs 1006 may receive input operands and an operation code, which determines an operation to be executed. One or more ALUs 1006 may be configured with a variety of logical and arithmetic circuits, including adders, subtractors, shifters, and logic gates, to process and manipulate the data according to the operation. Additionally, or alternatively, the one or more ALUs 1006 may support logical operations such as AND, OR, exclusive-OR (XOR), not-OR (NOR), and not-AND (NAND), enabling the one or more ALUs 1006 to handle conditional operations, comparisons, and bitwise operations.
[0121] The processor 1000 may support wireless communication in accordance with examples as disclosed herein. The processor 1000 may be configured to or operable to support a means for receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information. Alternatively, the processor 1000 may be configured to or operable to support a means for transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
[0122] Figure 11 illustrates an example of a NE 1100 in accordance with aspects of the present disclosure. The NE 1100 may include a processor 1102, a memory 1104, a controller 1106, and a transceiver 1108. The processor 1102, the memory 1104, the controller 1106, or the transceiver 1108, or various combinations thereof or various components thereof may be examples of means for performing various aspects of the present disclosure as described herein. These components may be coupled (e.g., operatively, communicatively, functionally, electronically, electrically) via one or more interfaces.
[0123] The processor 1102, the memory 1104, the controller 1106, or the transceiver 1108, or various combinations or components thereof may be implemented in hardware (e.g., circuitry). The hardware may include a processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or other programmable logic device, or any combination thereof configured as or otherwise supporting a means for performing the functions described in the present disclosure.
[0124] The processor 1102 may include an intelligent hardware device (e.g., a general- purpose processor, a DSP, a CPU, an ASIC, an FPGA, or any combination thereof). In some implementations, the processor 1102 may be configured to operate the memory 1104. In some other implementations, the memory 1104 may be integrated into the processor 1102. The processor 1102 may be configured to execute computer-readable instructions stored in the memory 1104 to cause the NE 1100 to perform various functions of the present disclosure.
[0125] The memory 1104 may include volatile or non-volatile memory. The memory 1104 may store computer-readable, computer-executable code including instructions that, when executed by the processor 1102, cause the NE 1100 to perform various functions described herein. The code may be stored in a non-transitory computer-readable medium such as the memory 1104 or another type of memory. Computer-readable media includes both non-transitory computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A non-transitory storage medium may be any available medium that may be accessed by a general-purpose or special-purpose computer.
[0126] In some implementations, the processor 1102 and the memory 1104 coupled with the processor 1102 may be configured to cause the NE 1100 to perform one or more of the functions described herein (e.g., executing, by the processor 1102, instructions stored in the memory 1104). For example, the processor 1102 may support wireless communication at the NE 1100 in accordance with examples as disclosed herein. The NE 1100 may be configured to support a means for receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information. Alternatively, the NE 1100 may be configured to or operable to support a means for transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
[0127] The controller 1106 may manage input and output signals for the NE 1100. The controller 1106 may also manage peripherals not integrated into the NE 1100. In some implementations, the controller 1106 may utilize an operating system such as iOS®, ANDROID®, WINDOWS®, or other operating systems. In some implementations, the controller 1106 may be implemented as part of the processor 1102.
[0128] In some implementations, the NE 1100 may include at least one transceiver 1108. In some other implementations, the NE 1100 may have more than one transceiver 1108. The transceiver 1108 may represent a wireless transceiver. The transceiver 1108 may include one or more receiver chains 1110, one or more transmitter chains 1112, or a combination thereof.
[0129] A receiver chain 1110 may be configured to receive signals (e.g., control information, data, packets) over a wireless medium. For example, the receiver chain 1110 may include one or more antennas for receiving the signal over the air or wireless medium. The receiver chain 1110 may include at least one amplifier (e.g., a low-noise amplifier (LNA)) configured to amplify the received signal. The receiver chain 1110 may include at least one demodulator configured to demodulate the received signal and obtain the transmitted data by reversing the modulation technique applied during transmission of the signal. The receiver chain 1110 may include at least one decoder for decoding and processing the demodulated signal to recover the transmitted data.
[0130] A transmitter chain 1112 may be configured to generate and transmit signals (e.g., control information, data, packets). The transmitter chain 1112 may include at least one modulator for modulating data onto a carrier signal, preparing the signal for transmission over a wireless medium. The at least one modulator may be configured to support one or more techniques such as amplitude modulation (AM), frequency modulation (FM), or digital modulation schemes like phase-shift keying (PSK) or quadrature amplitude modulation (QAM). The transmitter chain 1112 may also include at least one power amplifier configured to amplify the modulated signal to an appropriate power level suitable for transmission over the wireless medium. The transmitter chain 1112 may also include one or more antennas for transmitting the amplified signal into the air or wireless medium.
[0131] Figure 12 illustrates a flowchart of a method 1200 in accordance with aspects of the present disclosure. The operations of the method may be implemented by a NE as described herein. In some implementations, the NE may execute a set of instructions to control the function elements of the NE to perform the described functions.
[0132] At 1202, the method 1200 may include receiving, from a first network entity, a request to support a model operation according to a discovery criterion. The operations of 1202 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1202 may be performed by a NE as described with reference to Figure 11.
[0133] At 1204, the method 1200 may include determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion. The operations of 1204 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1204 may be performed by a NE as described with reference to Figure 11.
[0134] At 1206, the method 1200 may include determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion. The operations of 1206 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1206 may be performed by a NE as described with reference to Figure 11.
[0135] At 1208, the method 1200 may include transmitting, to the first network entity, a response message comprising the first information and the second information. The operations of 1208 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1208 may be performed by a NE as described with reference to Figure 11.
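The four operations of method 1200 can be sketched end to end as a handler at a central network entity that queries one distributed entity per wireless communication network and returns the combined information. The class and method names are hypothetical; the flow mirrors steps 1202 through 1208.

```python
# Minimal sketch of the method 1200 flow: receive a request (1202),
# determine node information via each distributed network entity
# (1204, 1206), and transmit a response with all information (1208).

class StubDistributedEntity:
    """Stand-in for a distributed network entity in one network."""
    def __init__(self, node_id):
        self.node_id = node_id

    def find_node(self, model_operation, criterion):
        # Assumed interface: return info on a node that can support
        # at least part of the model operation per the criterion.
        return {"node": self.node_id, "operation": model_operation}

class CentralNetworkEntity:
    def __init__(self, distributed_entities):
        # Maps each wireless communication network to its distributed entity.
        self.distributed_entities = distributed_entities

    def handle_request(self, first_network_entity, model_operation, criterion):
        infos = [entity.find_node(model_operation, criterion)
                 for entity in self.distributed_entities.values()]
        # Response message comprising the first and second information.
        return {"to": first_network_entity, "node_info": infos}

central = CentralNetworkEntity({
    "PLMN1": StubDistributedEntity("node-1"),
    "PLMN2": StubDistributedEntity("node-2"),
})
response = central.handle_request("val-server", "fl-training", {"area": "A1"})
```

The same shape generalizes to more than two networks, since the handler iterates over all registered distributed entities.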
[0136] It should be noted that the method 1200 described herein describes a possible implementation, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible.
[0137] Figure 13 illustrates a flowchart of a method 1300 in accordance with aspects of the present disclosure. The operations of the method 1300 may be implemented by a NE as described herein. In some implementations, the NE may execute a set of instructions to control the function elements of the NE to perform the described functions.
[0138] At 1302, the method 1300 may include transmitting, to a central network entity, a request to support a model operation according to a discovery criterion. The operations of 1302 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1302 may be performed by a NE as described with reference to Figure 11.
[0139] At 1304, the method 1300 may include receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion. The operations of 1304 may be performed in accordance with examples as described herein. In some implementations, aspects of the operations of 1304 may be performed by a NE as described with reference to Figure 11.
[0140] There is provided a central network entity for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the central network entity to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.
[0141] Such a central network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
[0142] The central network entity may be a central AIMLE server. The first distributed network entity may be a first distributed AIMLE server. The first distributed network entity may be part of a first PLMN. The first wireless communication network may be the first PLMN. The second distributed network entity may be part of a second PLMN. The second wireless communication network may be the second PLMN. The second distributed network entity may be a second distributed AIMLE server. The first network entity may be a VAL server. The model operation may be an AI model operation. The model operation may be an application service. The model operation may be a VAL service. The model operation may be an ASP service. The model operation may be an AI/ML session. The model operation may be an AI/ML service. The model operation may be an ML model operation. The model operation may be an AI/ML model operation. The model operation may be an operation of a model. The model may be an AI/ML model. The model operation may be an AIMLE-assisted operation. The discovery criterion may be an AIMLE client discovery criteria. The request to support the model operation may be an AIMLE client discovery request. The response message may be an AIMLE client discovery response.
[0143] The first node may be a first user equipment. The first node may be connected to the first wireless communication network. The second node may be a second user equipment. The second node may be connected to the second wireless communication network. The first wireless communication network may be a first PLMN. The first wireless communication network may be a first NPN. The second wireless communication network may be a second PLMN. The second wireless communication network may be a second NPN.
[0144] The first information corresponding to the first node for supporting at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 3 above. The first information corresponding to the first node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 5 above.
[0145] The second information corresponding to the second node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 3 above. The second information corresponding to the second node for supporting the at least part of the model operation according to the discovery criterion may comprise at least one of the information elements described in Table 5 above.
[0146] The model operation may be a ML task or ML model task. The model operation may be at least one of: a ML model training operation, a ML model inference operation, a FL training operation, a FL inference operation, a VFL or HFL operation, a Transfer Learning operation, a ML model lifecycle operation, a ML model pipeline and a workflow operation.
[0147] The discovery criterion may be at least one of the information elements described in Table 2 above. The discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
[0148] The service area identifier may correspond to a service area. The service area may be an ASP coverage area. The service area may be an edge coverage area. The service area may be a cloud coverage area. The service area may comprise the first wireless communication network. The service area may comprise the second wireless communication network. The service area may correspond to the coverage of the serving wireless communication network.
[0149] The allowed mobile network operator may permit subscribers to consume an AIMLE service. The allowed mobile network operator may be an MNO name. The allowed mobile network operator may be a PLMN ID. The allowed mobile network operator may be an operator of the first wireless communication network. The allowed mobile network operator may be an operator of the second wireless communication network.
[0150] The information for accessing the first wireless communication network may comprise a permission for accessing the first wireless communication network. The information for accessing the first wireless communication network may comprise an authorization for accessing the first wireless communication network. The information for accessing the first wireless communication network may comprise a limitation on a capability of an entity connected to the first wireless communication network. The information for accessing the second wireless communication network may comprise a permission for accessing the second wireless communication network. The information for accessing the second wireless communication network may comprise an authorization for accessing the second wireless communication network. The information for accessing the second wireless communication network may comprise a limitation on a capability of an entity connected to the second wireless communication network.
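As a non-limiting illustration only, the discovery-criterion information elements described above could be carried in a single record. All field names below are hypothetical, since the description lists the elements but does not mandate any particular encoding:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the discovery-criterion information elements.
# Field names and types are illustrative assumptions, not prescribed.
@dataclass
class DiscoveryCriterion:
    service_area_id: Optional[str] = None             # e.g. an ASP/edge/cloud coverage area
    allowed_mno: Optional[str] = None                 # MNO name or PLMN ID
    first_network_access_info: Optional[dict] = None  # permissions/authorizations/limitations
    second_network_access_info: Optional[dict] = None
    serving_network: Optional[str] = None
    preferred_network: Optional[str] = None

criterion = DiscoveryCriterion(service_area_id="edge-area-1", allowed_mno="PLMN-001-01")
```

Any unset element is simply absent from the criterion, consistent with the "at least one of" formulation above.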
[0151] The at least one processor may be further configured to cause the central network entity to: receive, from a second network entity, third information corresponding to the first distributed network entity or fourth information corresponding to the second distributed network entity. The second network entity may comprise a repository. The second network entity may comprise a common API framework core function.
[0152] The at least one processor may be further configured to cause the central network entity to: determine a quantity of distributed network entities to support the model operation according to the discovery criterion. The quantity of distributed network entities may be based on a performance requirement of the model operation. The performance requirement of the model operation may be based on a quantity of nodes supporting the model operation.
[0153] The at least one processor may be further configured to cause the central network entity to: receive, from the first distributed network entity, an indication of a request for an availability or capability of the second node to support the model operation. The indication of the request for the availability of the second node to support the model operation may be an AIMLE client selection request.
[0154] The at least one processor may be further configured to cause the central network entity to: transmit, to the first distributed network entity, an indication of the availability or the capability of the second node to support the model operation. The indication of the availability of the second node to support the model operation may be an AIMLE client selection response.

[0155] The at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
[0156] The at least one processor may be further configured to cause the central network entity to: identify the first distributed network entity or the second distributed network entity based on the discovery criterion. The at least one processor being configured to cause the central network entity to identify the first distributed network entity or the second distributed network entity may comprise the at least one processor being further configured to cause the central network entity to: obtain an identity of the first distributed network entity or an identity of the second distributed network entity from a machine learning repository.
[0157] There is also provided a method performed or performable by a central network entity, the method comprising: receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
[0158] Such a method performed or performable by the central network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
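The central-entity method of paragraph [0157] can be sketched, purely as a hypothetical and non-limiting illustration (all names below are invented for the example), as a loop that queries one distributed entity per wireless communication network and aggregates the per-network node information into a single response:

```python
# Illustrative sketch only: a central entity collects node information
# from distributed entities, each serving a different network.
def handle_discovery_request(request, distributed_entities):
    """Return the combined node information matching the discovery criterion."""
    collected = []
    for entity in distributed_entities:
        # Each distributed entity resolves nodes within its own network.
        info = entity.discover_nodes(request["criterion"])
        if info:
            collected.append(info)
    return {"response": collected}

class StubEntity:
    """Stand-in for a distributed network entity (e.g. an edge AIMLE server)."""
    def __init__(self, network, nodes):
        self.network, self.nodes = network, nodes
    def discover_nodes(self, criterion):
        return {"network": self.network, "nodes": self.nodes}

entities = [StubEntity("PLMN-A", ["node-1"]), StubEntity("PLMN-B", ["node-2"])]
resp = handle_discovery_request({"criterion": {}}, entities)
```

The single response message carrying both the first and second information corresponds here to the aggregated `collected` list.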
[0159] The discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
[0160] The method may further comprise receiving, from a second network entity, third information corresponding to the first distributed network entity or fourth information corresponding to the second distributed network entity. The method may further comprise determining a quantity of distributed network entities to support the model operation according to the discovery criterion. The quantity of distributed network entities may be based on a performance requirement of the model operation. The performance requirement of the model operation may be based on a quantity of nodes supporting the model operation.
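A minimal sketch of the quantity determination described above, assuming (hypothetically) that the performance requirement is expressed as a minimum number of participating nodes and that an average nodes-per-entity estimate is available:

```python
import math

# Hypothetical helper: derive the quantity of distributed entities from a
# node-count performance requirement. The nodes-per-entity estimate is an
# assumption for illustration; it is not specified by the description.
def required_entity_count(min_nodes_required, avg_nodes_per_entity):
    if avg_nodes_per_entity <= 0:
        raise ValueError("avg_nodes_per_entity must be positive")
    return math.ceil(min_nodes_required / avg_nodes_per_entity)
```

For instance, a requirement of 10 participating nodes with roughly 4 nodes reachable per distributed entity would yield 3 entities.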
[0161] The method may further comprise receiving, from the first distributed network entity, an indication of a request for an availability or capability of the second node to support the model operation. The method may further comprise transmitting, to the first distributed network entity, an indication of the availability or the capability of the second node to support the model operation.
[0162] The at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
[0163] The method may further comprise identifying the first distributed network entity or the second distributed network entity based on the discovery criterion. Identifying the first distributed network entity or the second distributed network entity may comprise obtaining an identity of the first distributed network entity or an identity of the second distributed network entity from a machine learning repository.
[0164] There is also provided a first network entity for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the first network entity to: transmit, to a central network entity, a request to support a model operation according to a discovery criterion; and receive, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
[0165] Such a first network entity tends to provide improved support for the model operation by providing the plurality of nodes over different wireless communication networks via the first distributed network entity and the second distributed network entity.
[0166] The discovery criterion may comprise at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
[0167] The at least one of the first distributed network entity and the second distributed network entity may comprise at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
[0168] There is also provided a method performed or performable by a first network entity, the method comprising: transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
[0169] A central network entity for wireless communication is described. The central network entity may be configured to, capable of, or operable to perform one or more operations as described herein. A method performed or performable by the central network entity is described herein. A processor for wireless communication is described. The processor may be configured to, capable of, or operable to perform one or more operations as described herein.
[0170] A first network entity for wireless communication is described. The first network entity may be configured to, capable of, or operable to perform one or more operations as described herein. A method performed or performable by the first network entity is described herein. A processor for wireless communication is described. The processor may be configured to, capable of, or operable to perform one or more operations as described herein.
[0171] Examples described herein may relate to how to enable the discovery and selection of AIMLE clients in a target area covered by one or more edge service areas operated by different PLMNs, by a central/cloud-deployed AIMLE server that has no direct access to the VAL UEs per PLMN.
[0172] Examples described herein may relate to providing a method for discovering and selecting AI/ML participants (AIMLE clients, VAL clients) in an AIMLE-assisted operation (e.g. ML model training, FL, TL) in a given service area, assuming a hierarchical AIMLE deployment model in which the AIMLE servers are connected to different PLMNs. Previous solutions do not consider multi-operator aspects, nor the related interaction among AI/ML servers in cloud/edge and core networks.
[0173] There is also provided an application enablement function. There is also provided a method at a first entity for controlling the operation of an ML task, wherein the ML task is communicated via a plurality of wireless communication networks, the method comprising: receiving a requirement for discovering a plurality of nodes for operating an ML task, wherein the plurality of nodes is connected to a plurality of wireless communication networks; determining a set of application entities, each entity being associated with a wireless communication network from the plurality of networks, wherein each application entity is configured to support part of the ML task for the corresponding wireless communication network; sending a request to the corresponding application entity to discover the nodes connected to each of the associated wireless communication networks; and receiving information from the corresponding application entities, wherein the information comprises the identity of at least one node associated with at least one of the plurality of wireless communication networks.
[0174] The method may further comprise sending information on the discovered nodes to a second application. The set of application entities may be application enablement entities, SEAL entities, analytics functions, or application functions. The requirement may comprise information on the allowed mobile network operator networks, permissions and/or authorizations when accessing each of the wireless communication networks, information on the serving PLMN, information on a PLMN preferred by the first application, a service area corresponding to the coverage of the serving PLMN, or a combination thereof.
[0175] The method may further comprise discovering the set of application entities via fetching information from a repository, a common API framework core function, or a combination thereof. The method may further comprise selecting the minimum required application entities from the determined set of application entities based on the requirement, and in particular on the serving and/or preferred network associated with the second application. Selecting the minimum required application entities may be based on the performance requirement of the ML task, wherein the performance requirement is based on the minimum number of nodes participating in the ML task.
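Selecting the minimum required application entities might, as one hypothetical realization, order candidates by preference for the serving/preferred network and accumulate entities until the node-count requirement is met. All names below are illustrative assumptions:

```python
# Illustrative sketch only: pick the fewest application entities that
# together satisfy a minimum-node performance requirement, preferring
# entities on the serving/preferred network.
def select_entities(candidates, min_nodes, preferred_network=None):
    # Preferred-network entities first, then larger entities first.
    ordered = sorted(
        candidates,
        key=lambda e: (e["network"] != preferred_network, -e["node_count"]),
    )
    selected, total = [], 0
    for entity in ordered:
        if total >= min_nodes:
            break
        selected.append(entity)
        total += entity["node_count"]
    return selected

candidates = [
    {"id": "e1", "network": "PLMN-A", "node_count": 3},
    {"id": "e2", "network": "PLMN-B", "node_count": 5},
    {"id": "e3", "network": "PLMN-A", "node_count": 2},
]
chosen = select_entities(candidates, min_nodes=5, preferred_network="PLMN-A")
```

With these example inputs, the two PLMN-A entities already cover the five-node requirement, so the PLMN-B entity is not selected.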
[0176] There is also provided a method at an application entity associated with a first wireless communication network, the method comprising: receiving a request from the second application for supporting the selection of at least one discovered node for performing the ML task; evaluating whether the at least one discovered node is capable and/or available of performing the ML task; determining to query another application entity and/or the first application for discovering further nodes for performing the ML task, wherein the further nodes are connected to a second wireless communication network; obtaining information on the further nodes and the second wireless communication network; and providing the selected nodes associated with both the first and second wireless communication networks to the second application.

[0177] It should be noted that the method described herein describes a possible implementation, and that the operations and the steps may be rearranged or otherwise modified and that other implementations are possible.
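The application-entity behaviour of paragraph [0176] can be sketched as follows. This is an illustrative, assumption-laden example rather than the claimed procedure; `query_peer` stands in for the query towards another application entity on a second network:

```python
# Illustrative sketch only: an application entity checks whether its
# locally discovered nodes suffice for the ML task, and queries a peer
# entity on another network for further nodes if they do not.
def resolve_nodes(local_nodes, required, query_peer):
    capable = [n for n in local_nodes if n.get("available")]
    if len(capable) < required:
        # Fall back to nodes reachable via the second network.
        capable += query_peer(required - len(capable))
    return capable[:required]

def peer_stub(count):
    # Stand-in for the peer application entity's discovery response.
    return [{"id": f"peer-{i}", "available": True} for i in range(count)]

nodes = resolve_nodes([{"id": "n1", "available": True}], required=2, query_peer=peer_stub)
```

The returned list corresponds to the selected nodes associated with both the first and second wireless communication networks.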
[0178] The description herein is provided to enable a person having ordinary skill in the art to make or use the disclosure. Various modifications to the disclosure will be apparent to a person having ordinary skill in the art, and the generic principles defined herein may be applied to other variations without departing from the scope of the disclosure. Thus, the disclosure is not limited to the examples and designs described herein but is to be accorded the broadest scope consistent with the principles and novel features disclosed herein.

Claims

What is claimed is:
1. A central network entity for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the central network entity to: receive, from a first network entity, a request to support a model operation according to a discovery criterion; determine, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determine, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmit, to the first network entity, a response message comprising the first information and the second information.
2. The central network entity of claim 1, wherein the discovery criterion comprises at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
3. The central network entity of claim 1 or claim 2, wherein the at least one processor is further configured to cause the central network entity to: receive, from a second network entity, third information corresponding to the first distributed network entity or fourth information corresponding to the second distributed network entity.
4. The central network entity of any one of claims 1 to 3, wherein the at least one processor is further configured to cause the central network entity to: determine a quantity of distributed network entities to support the model operation according to the discovery criterion.
5. The central network entity of claim 4, wherein the quantity of distributed network entities is based on a performance requirement of the model operation.
6. The central network entity of claim 5, wherein the performance requirement of the model operation is based on a quantity of nodes supporting the model operation.
7. The central network entity of any one of claims 1 to 6, wherein the at least one processor is further configured to cause the central network entity to: receive, from the first distributed network entity, an indication of a request for an availability or capability of the second node to support the model operation.
8. The central network entity of claim 7, wherein the at least one processor is further configured to cause the central network entity to: transmit, to the first distributed network entity, an indication of the availability or the capability of the second node to support the model operation.
9. The central network entity of any one of claims 1 to 8, wherein at least one of the first distributed network entity and the second distributed network entity comprises at least one of: an application enablement entity; a service enabler architecture layer, SEAL, entity; an analytics function; an edge platform function; a cloud platform function; and an application function.
10. The central network entity of any one of claims 1 to 9, wherein the at least one processor is further configured to cause the central network entity to: identify the first distributed network entity or the second distributed network entity based on the discovery criterion.
11. The central network entity of claim 10, wherein the at least one processor being configured to cause the central network entity to identify the first distributed network entity or the second distributed network entity comprises the at least one processor being further configured to cause the central network entity to: obtain an identity of the first distributed network entity or an identity of the second distributed network entity from a machine learning repository.
12. A method performed or performable by a central network entity, the method comprising: receiving, from a first network entity, a request to support a model operation according to a discovery criterion; determining, via a first distributed network entity in a first wireless communication network, first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion; determining, via a second distributed network entity in a second wireless communication network, second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion; and transmitting, to the first network entity, a response message comprising the first information and the second information.
13. The method of claim 12, wherein the discovery criterion comprises at least one of: a service area identifier; an allowed mobile network operator; information for accessing the first wireless communication network; information for accessing the second wireless communication network; information on a serving wireless communication network; information on a preferred wireless communication network; and a combination thereof.
14. A first network entity for wireless communication, comprising: at least one memory; and at least one processor coupled with the at least one memory and configured to cause the first network entity to: transmit, to a central network entity, a request to support a model operation according to a discovery criterion; and receive, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
15. A method performed or performable by a first network entity, the method comprising: transmitting, to a central network entity, a request to support a model operation according to a discovery criterion; and receiving, from the central network entity, a response message comprising first information corresponding to a first node for supporting at least part of the model operation according to the discovery criterion, wherein the response message further comprises second information corresponding to a second node for supporting at least part of the model operation according to the discovery criterion.
PCT/EP2025/056437 2025-02-07 2025-03-10 Supporting a model operation in a wireless communication system Pending WO2025237559A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GR20250100104 2025-02-07

Publications (1)

Publication Number Publication Date
WO2025237559A1 true WO2025237559A1 (en) 2025-11-20

Family

ID=94974037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2025/056437 Pending WO2025237559A1 (en) 2025-02-07 2025-03-10 Supporting a model operation in a wireless communication system

Country Status (1)

Country Link
WO (1) WO2025237559A1 (en)

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Functional architecture and information flows for AIML Enablement Service; (Release 19)", vol. SA WG6, no. V19.0.0, 10 January 2025 (2025-01-10), pages 1 - 139, XP052722056, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/23_series/23.482/23482-j00.zip 23482-j00.docx> [retrieved on 20250110] *
"3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Study on application layer support for AI/ML services; (Release 19)", no. V19.1.0, 27 September 2024 (2024-09-27), pages 1 - 138, XP052650365, Retrieved from the Internet <URL:https://ftp.3gpp.org/Specs/archive/23_series/23.700-82/23700-82-j10.zip 23700-82-j10.docx> [retrieved on 20240927] *
