
WO2025123323A1 - Application data link extension and intelligent selection - Google Patents

Application data link extension and intelligent selection

Info

Publication number
WO2025123323A1
Authority
WO
WIPO (PCT)
Prior art keywords
network connection
neighbor
network
link
software application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/139114
Other languages
English (en)
Inventor
Houfu Cheng
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qualcomm Inc
Original Assignee
Qualcomm Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qualcomm Inc filed Critical Qualcomm Inc
Priority to PCT/CN2023/139114 priority Critical patent/WO2025123323A1/fr
Publication of WO2025123323A1 publication Critical patent/WO2025123323A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/50Network services
    • H04L67/60Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L67/61Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources taking into account QoS or priority requirements
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/10Connection setup
    • H04W76/14Direct-mode setup
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W76/00Connection management
    • H04W76/10Connection setup
    • H04W76/15Setup of multiple wireless link connections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W8/00Network data management
    • H04W8/005Discovery of network devices, e.g. terminals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04WWIRELESS COMMUNICATION NETWORKS
    • H04W88/00Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W88/02Terminal devices
    • H04W88/06Terminal devices adapted for operation in multiple networks or having at least two operational modes, e.g. multi-mode terminals

Definitions

  • LLMs Large Language Models
  • LSMs Large Speech Models
  • LVMs Large Vision Models
  • VLMs Vision Language Models
  • Various aspects include methods of communicating information that may be performed by a processor of a user computing device.
  • Various aspect methods may include scanning a local network to receive network context information from one or more neighbor devices that include network connections or links to a wide area network (WAN) , using an artificial intelligence (AI) model to classify a software application detected on the user computing device based on one or more network condition parameters associated with the detected software application, analyzing a device network connection or link and at least one neighbor network connection or link of the one or more neighbor devices, selecting a network connection or link for the detected software application based on an output of the AI model and a result of analyzing the device network connection or link and at least one neighbor network connection or link of the one or more neighbor devices, establishing a data transfer link to a neighbor device that includes the selected network connection or link, and using the selected network connection or link and the established data transfer link to the neighbor device to communicate data associated with the detected software application.
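The aspect method summarized in the preceding bullet follows a recognizable control flow: scan, classify, analyze, select, establish, use. The sketch below is a minimal, hypothetical Python outline of that flow; every name in it (LinkInfo, scan_local_network, establish_data_transfer_link, score, etc.) is an illustrative assumption rather than an interface defined by this publication.

```python
# Hypothetical outline of the aspect method described above; all names are
# illustrative assumptions, not APIs defined by this publication.
from dataclasses import dataclass

@dataclass
class LinkInfo:
    owner: str            # "local" or a neighbor device identifier
    link_type: str        # e.g. "wifi", "cellular", "ethernet"
    bandwidth_mbps: float
    latency_ms: float

def score(link, app_class):
    # Toy rule: latency-sensitive classes weight latency, others weight bandwidth.
    return -link.latency_ms if app_class == "time_sensitive" else link.bandwidth_mbps

def select_link_for_app(app, local_links, ai_model, scan_local_network,
                        establish_data_transfer_link):
    # 1. Scan the local network for neighbor devices and their WAN connections.
    neighbor_links = scan_local_network()                       # -> list[LinkInfo]

    # 2. Use the AI model to classify the detected application from its
    #    network condition parameters (e.g., a QoS parameter).
    app_class = ai_model.classify(app["network_condition_parameters"])

    # 3./4. Analyze the device link and the neighbor links, then select one
    #       based on the model output and the analysis result.
    candidates = list(local_links) + list(neighbor_links)
    selected = max(candidates, key=lambda link: score(link, app_class))

    # 5. Establish a data transfer link (USB, BLE, Wi-Fi, D2D, ...) to the
    #    neighbor device that owns the selected connection, if it is not local.
    transfer_link = None
    if selected.owner != "local":
        transfer_link = establish_data_transfer_link(selected.owner)

    # 6. The caller then communicates the application's data over the selected
    #    connection (and the data transfer link, when a neighbor is involved).
    return selected, transfer_link
```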
  • AI artificial intelligence
  • using the AI model to classify the software application detected on the user computing device based on the one or more network condition parameters associated with the detected software application may include using the AI model to classify the software application detected on the user computing device based on a quality of service (QOS) parameter of the detected software application.
  • QOS quality of service
  • establishing the data transfer link to the neighbor device that may include the selected network connection may include one or more of establishing a universal serial bus (USB) link to the neighbor device, establishing an Ethernet link to the neighbor device, establishing a short-range wireless technology link to the neighbor device, establishing a wireless personal area network connection to the neighbor device, establishing a Wi-Fi link to the neighbor device, establishing a telecommunications link to the neighbor device, or establishing a device-to-device (D2D) link to the neighbor device.
  • USB universal serial bus
  • using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application may include using a mission-critical/ultra-low latency (MCX) link and a tethering link to the neighbor device to communicate the data associated with the detected software application.
  • MCX mission-critical/ultra-low latency
  • using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application may include offloading a data processing or communication task from a primary network connection of the user computing device to a network connection of the neighbor device.
  • using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application may include updating routing information of the user computing device to direct data traffic through the selected network connection.
  • selecting the network connection for the detected software application based on the output of the AI model and the result of analyzing the device network connection and the at least one neighbor network connection of the one or more neighbor devices may include selecting the network connection in response to determining that there may be an operational advantage to offloading data to a network connection of a neighboring device.
  • selecting the network connection for the detected software application may include selecting multiple network connections, and using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application may include simultaneously using the selected multiple network connections to communicate the data associated with the detected software application.
  • simultaneously using the selected multiple network connections to communicate the data associated with the detected software application may include using a first type of network connection of a first neighbor device and a second type of network connection of a second neighbor device, in which the first type of network connection and the second type of network connection are different types of network connections.
  • the first type of network connection may be a Wi-Fi network
  • the second type of network connection may be a telecommunications network
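Where multiple network connections are selected, the bullets above describe using different connection types of different neighbors at the same time. Below is a small, hedged sketch of that idea: chunks of one transfer are spread across an assumed Wi-Fi link of one neighbor and an assumed cellular link of a second neighbor; send_over_link and the link objects are hypothetical helpers, not interfaces from this publication.

```python
# Illustrative sketch of simultaneously using two different neighbor connections;
# send_over_link() and the link objects are assumed helpers, not defined here.
from concurrent.futures import ThreadPoolExecutor

def transfer_over_multiple_links(chunks, wifi_link, cellular_link, send_over_link):
    """Spread one application's data chunks across a neighbor's Wi-Fi connection
    and a second neighbor's cellular connection, in parallel."""
    links = [wifi_link, cellular_link]
    with ThreadPoolExecutor(max_workers=len(links)) as pool:
        futures = [
            pool.submit(send_over_link, links[seq % len(links)], seq, chunk)
            for seq, chunk in enumerate(chunks)   # sequence numbers aid reassembly
        ]
        return [future.result() for future in futures]
```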
  • Further aspects may include a computing device having a processing system including at least one processor configured with processor-executable instructions to perform various operations corresponding to the methods summarized above. Further aspects may include a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processing system to perform various operations corresponding to the method operations summarized above. Further aspects may include a computing device having means for performing functions corresponding to the method operations summarized above.
  • FIG. 1A is a component block diagram illustrating example components in a system in package (SIP) that may be included in a computing device and configured to implement some embodiments.
  • SIP system in package
  • FIG. 1B is a component block diagram illustrating an example network that includes several user devices connected to the internet via different network connections or links that are suitable for intelligently selecting a network connection or link in accordance with some embodiments.
  • FIGs. 2A and 2B are component block diagrams illustrating example components that could be included in a system that is configured to perform intelligent network connection or link selection operations in accordance with some embodiments.
  • FIG. 3 is a process flow diagram illustrating a method 300 for intelligently selecting a network connection or link in accordance with some embodiments.
  • FIG. 4 is a component block diagram illustrating an example computing device in the form of a laptop that is suitable for implementing some embodiments.
  • FIG. 5 is a component block diagram illustrating an example wireless communication device suitable for use with various embodiments.
  • FIG. 6 is a component diagram of an example server suitable for implementing some embodiments.
  • Various embodiments include methods, and computing devices and processing systems configured to implement the methods, for using intelligent network connection or link selection to improve the performance and functioning of a communication network and its components.
  • the methods may include scanning a local network (e.g., using Bluetooth Low Energy (BLE) technology, etc.) to receive network context information from one or more neighbor devices.
  • BLE Bluetooth Low Energy
  • computing device is used herein to refer to (but not limited to) any one or all of personal computing devices, personal computers, workstations, laptop computers, Netbooks, Ultrabook, tablet computers, mobile communication devices, smartphones, user equipment (UE) , personal data assistants (PDAs) , palm-top computers, wireless electronic mail receivers, multimedia internet-enabled cellular telephones, media and entertainment systems, gaming systems (e.g., Nintendo ) , media players (e.g., digital versatile disc (DVD) players, Apple ) , digital video recorders (DVRs) , portable projectors, 3D holographic displays, wearable devices (e.g., earbuds, smartwatches, fitness trackers, augmented reality (AR) glasses, head-mounted displays, etc. )
  • gaming systems e.g., Nintendo
  • media players e.g., digital versatile disc (DVD) players, Apple
  • DVRs digital video recorders
  • portable projectors, 3D holographic displays
  • wearable devices
  • vehicle systems e.g., drones, automobiles, motorcycles, connected vehicles, electric vehicles, automotive displays, advanced driver-assistance systems (ADAS) , etc.
  • cameras e.g., surveillance cameras, embedded cameras
  • smart devices e.g., smart speakers, smart light bulbs, smartwatches, thermostats, smart glasses, etc.
  • IOT Internet of Things
  • processing system is used herein to refer to one or more processors, including multi-core processors, which are organized and configured to perform various computing functions.
  • Various embodiment methods may be implemented in one or more of multiple processors within a processing system as described herein.
  • SoC system on chip
  • a single SoC may contain circuitry for digital, analog, mixed-signal, and radio-frequency functions.
  • a single SoC may include a processing system that includes any number of general-purpose or specialized processors (e.g., network processors, digital signal processors, modem processors, video processors, etc. ) , memory blocks (e.g., ROM, RAM, Flash, etc. ) , and resources (e.g., timers, voltage regulators, oscillators, etc. ) .
  • general-purpose or specialized processors e.g., network processors, digital signal processors, modem processors, video processors, etc.
  • memory blocks e.g., ROM, RAM, Flash, etc.
  • resources e.g., timers, voltage regulators, oscillators, etc.
  • an SoC may include an applications processor that operates as the SoC’s main processor, central processing unit (CPU) , microprocessor unit (MPU) , arithmetic logic unit (ALU) , etc.
  • An SoC processing system also may include software for controlling integrated resources and processors, as well as for controlling peripheral devices.
  • SIP system in a package
  • a SIP may include a single substrate on which multiple IC chips or semiconductor dies are stacked in a vertical configuration.
  • the SIP may include one or more multi-chip modules (MCMs) on which multiple ICs or semiconductor dies are packaged into a unifying substrate.
  • MCMs multi-chip modules
  • a SIP also may include multiple independent SOCs coupled together via high-speed communication circuitry and packaged in close proximity, such as on a single motherboard, in a single UE, or in a single CPU device. The proximity of the SoCs facilitates high-speed communications and the sharing of memory and resources.
  • LXM large generative AI model
  • LLM large language model
  • An LXM may include multiple layers of neural networks (e.g., RNN, LSTM, transformer, etc. ) with millions or billions of parameters. LXMs support dialogic interactions and encapsulate expansive knowledge in an internal structure.
  • LXMs are capable of providing direct answers and/or are otherwise adept at various tasks, such as text summarization, translation, complex question-answering, conversational agents, etc.
  • LXMs may operate independently as standalone units, may be integrated into more comprehensive systems and/or into other computational units (e.g., those found in a SoC or SIP, etc. ) , and/or may interface with specialized hardware accelerators to improve performance metrics such as latency and throughput.
  • the LXM component may be enhanced with or configured to perform an adaptive algorithm that allows the LXM to better understand context information and dynamic user behavior.
  • the adaptive algorithms may be performed by the same processing system that manages the core functionality of the LXM and/or may be distributed across multiple independent processing systems.
  • LXMs include, but are not limited to, LLMs, LSMs, LVMs, and VLMs
  • LLMs are generally known for their capabilities in understanding and generating human language. These models may be trained on extensive textual datasets and may perform a broad range of tasks, such as machine translation, text summarization, and question-answering. LLMs have found applications in a broad range of industries including healthcare, finance, and customer service, among others.
  • LSM is a type of LXM specializing in processing and understanding auditory data. LSMs may translate spoken language into textual form and vice versa. LSMs excel at tasks such as speech-to-text conversion, voice recognition, natural language understanding within a spoken context, providing spoken word responses in machine-generated voices. The efficacy of LSMs lies in their capacity to learn from enormous datasets containing diverse accents, dialects, and languages.
  • LVM is a LXM that is trained to interpret and analyze visual data.
  • LVM models may use convolutional neural networks or similar architectures to process visual inputs and derive meaningful conclusions from them. From image classification to object detection and generating new images in response to natural language prompts, LVMs are growing in popularity and use in diverse areas such as medical imaging, autonomous vehicles, surveillance systems, advertising, and entertainment.
  • neural network is used herein to refer to an interconnected group of processing nodes (or neuron models) that collectively operate as a software application or process that controls a function of a computing device and/or generates an overall inference result as output.
  • Individual nodes in a neural network may attempt to emulate biological neurons by receiving input data, performing simple operations on the input data to generate output data, and passing the output data (also called “activation” ) to the next node in the network.
  • Each node may be associated with a weight value that defines or governs the relationship between input data and output data.
  • a neural network may learn to perform new tasks over time by adjusting these weight values.
  • the overall structure of the neural network and/or the operations of the processing nodes do not change as the neural network learns a task. Rather, learning is accomplished during a “training” process in which the values of the weights in each layer are determined.
  • the training process may include causing the neural network to process a task for which an expected/desired output is known, comparing the activations generated by the neural network to the expected/desired output, and determining the values of the weights in each layer based on the comparison results.
  • the neural network may begin “inference” to process a new task with the determined weights.
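As a concrete (and deliberately tiny) illustration of the training-then-inference idea in the bullets above, the toy single-neuron example below adjusts weight values by comparing activations with the expected output, then reuses the determined weights for inference; it is not taken from the publication.

```python
# Toy example only: weights are determined during "training" by comparing the
# activation with the expected/desired output, then held fixed for "inference".
def train(samples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0                        # weight values to be determined
    for _ in range(epochs):
        for x, expected in samples:
            activation = w * x + b         # forward pass on a task with known output
            error = activation - expected  # compare activation to expected output
            w -= lr * error * x            # adjust weights based on the comparison
            b -= lr * error
    return w, b

def infer(w, b, x):
    return w * x + b                       # forward path with the determined weights

w, b = train([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])   # learns roughly y = 2x + 1
print(round(infer(w, b, 3.0), 2))                     # approximately 7.0
```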
  • Inference is used herein to refer to a process that is performed at runtime or during the execution of the software application program corresponding to the neural network. Inference may include traversing the processing nodes in the neural network along a forward path to produce one or more values as an overall activation or overall “inference result. ”
  • Deep neural networks implement a layered architecture in which the activation of a first layer of nodes becomes an input to a second layer of nodes, the activation of a second layer of nodes becomes an input to a third layer of nodes, and so on.
  • computations in a deep neural network may be distributed over a population of processing nodes that make up a computational chain.
  • Deep neural networks may also include activation functions and sub-functions (e.g., a rectified linear unit that cuts off activations below zero, etc. ) between the layers.
  • the first layer of nodes of a deep neural network may be referred to as an input layer.
  • the final layer of nodes may be referred to as an output layer.
  • the layers in-between the input and final layer may be referred to as intermediate layers, hidden layers, or black-box layers.
  • Each layer in a neural network may have multiple inputs and thus multiple previous or preceding layers. Said another way, multiple layers may feed into a single layer.
  • some of the embodiments are described with reference to a single input or single preceding layer. However, the operations disclosed and described in this application may be applied to each of multiple inputs to a layer and multiple preceding layers.
  • RNN recurrent neural network
  • RNNs may include cycles or loops within the network that allow information to persist. This enables RNNs to maintain a “memory” of previous inputs in the sequence, which may be beneficial for tasks in which temporal dynamics and the context in which data appears are relevant.
  • LSTM long short-term memory network
  • LSTMs include a more complex recurrent unit that allows for the easier flow of gradients during backpropagation. This facilitates the model’s ability to learn from long sequences and remember over extended periods, making it apt for tasks such as language modeling, machine translation, and other sequence-to-sequence tasks.
  • the term “transformer” is used herein to refer to a specific type of neural network that includes an encoder and/or a decoder and is particularly well-suited for sequence data processing.
  • Transformers may use multiple self-attention components to process input data in parallel rather than sequentially.
  • the self-attention components may be configured to weigh different parts of an input sequence when producing an output sequence. Unlike solutions that focus on the relationship between elements in two different sequences, self-attention components may operate on a single input sequence.
  • the self-attention components may compute a weighted sum of all positions in the input sequence for each position, which may allow the model to consider other parts of the sequence when encoding each element. This may offer advantages in tasks that benefit from understanding the contextual relationships between elements in a sequence, such as sentence completion, translation, and summarization.
  • the weights may be learned during the training phase, allowing the model to focus on the most contextually relevant parts of the input for the task at hand.
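A minimal numerical sketch of the self-attention computation described above is given below: every position of a single input sequence is compared with every other position, the resulting weights are normalized, and a weighted sum over all positions is returned. The projection matrices and dimensions are arbitrary assumptions for illustration.

```python
# Minimal scaled dot-product self-attention over one input sequence (illustrative).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v               # project the same sequence
    scores = q @ k.T / np.sqrt(k.shape[-1])           # compare every pair of positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax attention weights
    return weights @ v                                # weighted sum over all positions

rng = np.random.default_rng(0)
sequence = rng.normal(size=(5, 8))                    # 5 tokens, 8-dimensional each
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(sequence, w_q, w_k, w_v).shape)  # (5, 8)
```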
  • Transformers, with their specialized architecture for handling sequence data and their capacity for parallel computation, often serve as foundational elements in constructing large generative AI models (LXMs) .
  • embedding layer is used herein to refer to a specialized layer within a neural network, typically at the input stage, which transforms discrete categorical values or tokens into continuous, high-dimensional vectors.
  • An embedding layer may operate as a lookup table in which each unique token or category is mapped to a point in a continuous vector space.
  • the vectors may be refined during the model’s training phase to encapsulate the characteristics or attributes of the tokens in a manner that is conducive to the tasks the model is configured to perform.
  • each token may represent any of a variety of different data types.
  • each token may represent one or more textual elements, such as a paragraph (s) , sentence (s) , clause (s) , word (s) , sub-word (s) , character (s) , etc.
  • each token may represent a feature extracted from audio signals, such as a phoneme, spectrogram, temporal dependency, Mel-frequency cepstral coefficients (MFCCs) that represent small segments of an audio waveform, etc.
  • MFCCs Mel-frequency cepstral coefficients
  • each token may correspond to a portion of an image (e.g., pixel blocks) , sequences of video frames, etc.
  • each token may be a complex data structure that encapsulates information from various sources.
  • a token may include both textual and visual information, each of which independently contributes to the token’s overall representation in the model.
  • Each token may be converted into a numerical vector via the embedding layer.
  • Each vector component (e.g., numerical value, parameter, etc.) may encode an attribute, quality, or characteristic of the original token.
  • the vector components may be adjustable parameters that are iteratively refined during the model training phase to improve the model’s performance during subsequent operational phases.
  • the numerical vectors may be high-dimensional space vectors (e.g., containing more than 300 dimensions, etc. ) in which each dimension in the vector captures a unique attribute, quality, or characteristic of the token.
  • dimension 1 of the numerical vector may encode the frequency of a word’s occurrence in a corpus of data
  • dimension 2 may represent the pitch or intensity of the sound of the word at its utterance
  • dimension 3 may represent the sentiment value of the word, etc.
  • Such intricate representation in high-dimensional space may help the LXM understand the semantic and syntactic subtleties of its inputs.
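The embedding-layer behavior described above amounts to a trainable lookup table from token ids to high-dimensional vectors. The snippet below sketches that lookup with an arbitrary 300-dimensional table; the vocabulary and dimensionality are assumptions for illustration.

```python
# Illustrative embedding lookup: each unique token maps to a continuous vector
# whose dimensions can come to encode attributes (frequency, pitch, sentiment, ...).
import numpy as np

vocab = {"network": 0, "link": 1, "latency": 2}       # toy vocabulary
embedding_table = np.random.default_rng(0).normal(size=(len(vocab), 300))

def embed(tokens):
    ids = [vocab[token] for token in tokens]
    return embedding_table[ids]                        # one 300-dim vector per token

print(embed(["network", "latency"]).shape)             # (2, 300)
```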
  • the tokens may be processed sequentially through layers of the LXM or neural network, which may include structures or networks appropriate for sequence data processing, such as transformer architectures, recurrent neural networks (RNNs) , or long short-term memory networks (LSTMs) .
  • RNNs recurrent neural networks
  • LSTMs long short-term memory networks
  • sequence data processing is used herein to refer to techniques or technologies for handling ordered sets of tokens in a manner that preserves their original sequential relationships and captures dependencies between various elements within the sequence.
  • the resulting output may be a probabilistic distribution or a set of probability values, each corresponding to a “possible succeeding token” in the existing sequence.
  • the LXM may suggest the possible succeeding token determined to have the highest probability of completing the text sequence.
  • the LXM may choose the token with the highest determined probability value to augment the existing sequence, which may subsequently be fed back into the model for further text production.
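The last two bullets describe picking the highest-probability succeeding token and feeding the augmented sequence back into the model. A stripped-down greedy loop illustrating that behavior is sketched below; the toy stand-in model and token names are assumptions.

```python
# Illustrative greedy decoding: take the most probable succeeding token, append it,
# and feed the augmented sequence back into the model for further production.
def generate(model, sequence, steps=3):
    for _ in range(steps):
        probabilities = model(sequence)              # token -> probability
        next_token = max(probabilities, key=probabilities.get)
        sequence = sequence + [next_token]           # augment and feed back
    return sequence

toy_model = lambda seq: {"link": 0.7, "network": 0.2, "device": 0.1}  # stand-in model
print(generate(toy_model, ["select", "the"]))        # ['select', 'the', 'link', 'link', 'link']
```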
  • LXMs Large Generative AI Models
  • LLMs Large Language Models
  • Modern networks may include multiple different types of devices (e.g., laptops, smartphones, TVs, etc. ) that are each connected to the internet via a different network connection or link (e.g., Wi-Fi, cellular or telecommunications connections, IP, Ethernet, private networks provided by TV services, etc. ) .
  • a different network connection or link e.g., Wi-Fi, cellular or telecommunications connections, IP, Ethernet, private networks provided by TV services, etc.
  • an individual may use multiple internet-connected devices (e.g., phones, laptops, watches, TVs, and earbuds, etc. ) that connect to the internet through different network connections or links and technologies (e.g., mobile internet via carrier networks for phones, Wi-Fi, IP or Ethernet for laptops, a private network for TVs, etc. ) .
  • Each device may include its own network requirements and distinct network connections or links.
  • each device may include a different collection of software applications that each have their own application-specific quality of service (QoS) requirements, characteristics, and/or metrics.
  • QoS quality of service
  • a single fixed network connection or link may not be able to adequately meet the quality-of-service requirements of all these applications.
  • conventional solutions that use a single fixed network connection or link may degrade the user experience due to inadequate network bandwidth allocations, excessive end-to-end network latency, and/or other similar consequences.
  • Conventional network solutions also often use a centralized hub (e.g., a router, modem, etc. ) to manage connectivity in such networks, which may further degrade the user experience and/or introduce additional constraints or limitations.
  • Some embodiments may include components configured to improve the user experience in environments in which individuals use multiple devices that are each linked to the internet through various different network connections or links.
  • the components may be configured to cooperate with neighboring devices and use advanced artificial intelligence (AI) techniques to intelligently select network connections or links that are tailored to the specific requirements of the different software applications operating on the devices based on any or all of information received from neighbor device (s) , network context information, types of network connections or links, network connection or link qualities, performance parameters of neighboring devices, etc.
  • the components may allow any device in the network to use one or more network connections or links of one or more other neighboring devices in the network to communicate information.
  • the components may be configured to establish and use direct communication links between different types of devices in the network to receive network information.
  • the components may use this information to determine, generate, approximate, or predict the network context information, use the network context information to make inferences, and select a network connection or link for an application based on the inferences. That is, the components may use the network context information to obtain a detailed real-time understanding of the current network conditions and application requirements and use this information to select the most appropriate network connection or communication link.
  • the network context information may include, but is not limited to, information that characterizes the requirements and performance metrics of each application, supported types of network connection or link, setup parameters for each network connection or link type, performance parameters for each network connection or link type, network performance data, the types of network connections or links available (e.g., low latency links, high bandwidth links, test links, etc. ) , the performance characteristics of the available network connection or links, performance characteristics associated with available types of network connections or links, etc.
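One way to picture the network context information listed above is as a structured record exchanged between devices. The dataclass below is a hypothetical shape for such a record; the field names simply mirror items from the list and are not defined by this publication.

```python
# Hypothetical container for the network context information a device might
# advertise to, and collect from, neighbor devices.
from dataclasses import dataclass, field

@dataclass
class NetworkContextInfo:
    device_id: str
    supported_link_types: list                            # e.g. ["wifi", "cellular", "ethernet"]
    link_qualities: dict = field(default_factory=dict)    # link_id -> bandwidth/latency metrics
    setup_parameters: dict = field(default_factory=dict)  # per network connection or link type
    security_protocols: list = field(default_factory=list)
    ssid: str = ""
    ip_address: str = ""
    mac_address: str = ""
    signal_strength_dbm: float = 0.0
    connection_status: str = "unknown"
```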
  • the components may be configured to perform network offloading operations that include transferring data processing or communication tasks from a device’s primary network connection to another network connection that is determined to be better suited for the telecommunications task.
  • a device connected to the internet via Wi-Fi may offload communication tasks to another smartphone that is connected to the internet via telecommunication links and is close enough for a short-range wireless connection (e.g., Bluetooth or Wi-Fi) to establish a direct device-to-device wireless link.
  • the components may be configured to route traffic through such neighboring devices (i.e., within Bluetooth or Wi-Fi range) associated with the selected connections or links.
  • the laptop may select the network connection or link and route its traffic through a neighboring smartphone that includes a cellular (e.g., 5G) telecommunications connection or link.
  • the components may be configured to create a self-organizing network of devices.
  • a smartphone, a laptop, and a smart TV may form a self-organized network and create a collaborative network environment to share network context information with each other.
  • Each device may be aware of its neighboring devices and their respective network connection or link capabilities. That is, each device within the network may be aware of the link characteristics of all other devices within the network, including their connections to the internet. As such, every device in the network may have a comprehensive understanding of the available network options within its immediate network environment. The devices may use this information to make informed decisions regarding network connectivity, connection or link selection, application performance optimization, etc.
  • the components may be configured to select the network connection or link based on the needs of their currently executing applications (e.g., bandwidth needs, latency sensitivity, reliability requirements, etc. ) .
  • the components may be configured to facilitate application-specific network selection. For example, a laptop running a time-sensitive application may select the most suitable network connection or link based on a result of analyzing the network connections or links available from its neighboring devices and the specific requirements of the time-sensitive application.
  • the laptop may select an ultra-reliable low latency network provided by the smartphone (or the phone’s low latency network slice, etc. ) for the time-sensitive application and/or use the smartphone as a hotspot.
  • the components may be configured to dynamically route traffic so that the device bypasses its direct network connection or link in favor of a more optimal route through other neighboring devices. For example, instead of a laptop connecting to the internet directly through its base station, it may route its traffic through the TV, which may in turn connect to the internet through a private network. This indirect routing may be based on the assessment that the TV’s network connection or link offers a better fit for the laptop’s application needs at that moment.
  • the components may be configured to use multiple network connections or links simultaneously and/or to combine the bandwidth of multiple network connection or links.
  • a laptop with access to multiple network connections or links e.g., Wi-Fi, Ethernet, etc.
  • the laptop may simultaneously use the Wi-Fi type of network connection of a tablet and the cellular network type of network connection of a smartphone to ensure robust and high-speed internet connectivity for a specific software application (e.g., a document download application, etc. ) .
  • the components may be configured to simultaneously use multiple different network connections or links by dividing data traffic into different streams that may be routed through different network connections or links and paths to the internet.
  • the components may be configured to allow for application-specific network stream selection. For example, a first application on the device may use a single stream for its data transfer and a second application on the device may use two or more streams for its data transfer. Each application may tailor its network usage based on its specific requirements.
  • the components may be configured to overcome the complexities and technical challenges associated with multi-link data management. For example, there are technical challenges associated with reassembling packets that arrive out of sequence when using multiple network connections or links for data transfer.
  • the components may be configured to manage these packets effectively to maintain data integrity and ensure the proper functioning of the applications.
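For the multi-link data management challenge just described, a common approach is to tag each packet with a sequence number and buffer out-of-order arrivals until the gap is filled. The sketch below illustrates that idea under the assumption of simple (sequence number, payload) framing; it is not the packet format of this publication.

```python
# Sketch of reassembling packets that arrive out of sequence when one transfer is
# split across multiple network connections or links; the framing is assumed.
def reassemble(packets):
    """packets: iterable of (sequence_number, payload) arriving from all links."""
    buffer, output, next_expected = {}, [], 0
    for seq, payload in packets:
        buffer[seq] = payload
        while next_expected in buffer:       # deliver in order as gaps are filled
            output.append(buffer.pop(next_expected))
            next_expected += 1
    return b"".join(output)

print(reassemble([(1, b"B"), (0, b"A"), (3, b"D"), (2, b"C")]))   # b'ABCD'
```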
  • FIG. 1A illustrates an example computing system or SIP 100 architecture that may be used in user end devices implementing the various embodiments.
  • the illustrated example SIP 100 includes two SOCs 102, 104, a clock 106, a voltage regulator 108, a wireless transceiver 166, and user input devices 170 (e.g., a touch-sensitive display, a touch pad, a mouse, etc. ) .
  • the first and second SOC 102, 104 may communicate via interconnection bus 150.
  • Various processors 110, 112, 114, 116, 118, 121, 122 may be interconnected to each other and to one or more memory elements 120, system components and resources 124, and a thermal management unit 132 via an interconnection bus 126, which may include advanced interconnects such as high-performance networks-on-chip (NOCs) .
  • NOCs high-performance networks-on-chip
  • the processor 152 may be interconnected to the power management unit 154, the mmWave transceivers 156, memory 158, and various additional processors 160 via the interconnection bus 164.
  • These interconnection buses 126, 150, 164 may include an array of reconfigurable logic gates and/or implement a bus architecture (e.g., CoreConnect, AMBA, etc. ) . Communications may be provided by advanced interconnects, such as NOCs.
  • any or all of the processors 110, 112, 114, 116, 121, 122 in the system may operate as the SoC’s main processor, central processing unit (CPU) , microprocessor unit (MPU) , arithmetic logic unit (ALU) , etc.
  • One or more of the coprocessors 118 may operate as the CPU.
  • the first SOC 102 may operate as the central processing unit (CPU) of the mobile computing device that carries out the instructions of software application programs by performing the arithmetic, logical, control and input/output (I/O) operations specified by the instructions.
  • the second SOC 104 may operate as a specialized processing unit.
  • the second SOC 104 may operate as a specialized 5G processing unit responsible for managing high volume, high speed (e.g., 5 Gbps, etc. ) , and/or very high-frequency short wavelength (e.g., 28 GHz mmWave spectrum, etc. ) communications.
  • the first SOC 102 may include a digital signal processor (DSP) 110, a modem processor 112, a graphics processor 114, an applications processor 116, one or more coprocessors 118 (e.g., vector co-processor, CPUCP, etc. ) connected to one or more of the processors, memory 120, data processing unit (DPU) 121, artificial intelligence processor 122, system components and resources 124, an interconnection bus 126, one or more temperature sensors 130, a thermal management unit 132, and a thermal power envelope (TPE) component 134.
  • DSP digital signal processor
  • coprocessors 118 e.g., vector co-processor, CPUCP, etc.
  • the second SOC 104 may include a 5G modem processor 152, a power management unit 154, an interconnection bus 164, a plurality of mmWave transceivers 156, memory 158, and various additional processors 160, such as an applications processor, packet processor, etc.
  • Each processor 110, 112, 114, 116, 118, 121, 122, 152, 160 may include one or more cores, and each processor/core may perform operations independent of the other processors/cores.
  • the first SOC 102 may include a processor that executes a first type of operating system (e.g., FreeBSD, LINUX, OS X, etc. ) and a processor that executes a second type of operating system (e.g., MICROSOFT WINDOWS 11) .
  • a first type of operating system e.g., FreeBSD, LINUX, OS X, etc.
  • a second type of operating system e.g., MICROSOFT WINDOWS 11
  • processors 110, 112, 114, 116, 118, 121, 122, 152, 160 may be included as part of a processor cluster architecture (e.g., a synchronous processor cluster architecture, an asynchronous or heterogeneous processor cluster architecture, etc. ) .
  • a processor cluster architecture e.g., a synchronous processor cluster architecture, an asynchronous or heterogeneous processor cluster architecture, etc.
  • processors 110, 112, 114, 116, 118, 121, 122, 152, 160 may operate as the CPU of the mobile computing device.
  • any or all of the processors 110, 112, 114, 116, 118, 121, 122, 152, 160 may be included as one or more nodes in one or more CPU clusters.
  • a CPU cluster may be a group of interconnected nodes (e.g., processing cores, processors, SOCs, SIPs, computing devices, etc. ) configured to work in a coordinated manner to perform a computing task.
  • Each node may run its own operating system and contain its own CPU, memory, and storage.
  • a task that is assigned to the CPU cluster may be divided into smaller tasks that are distributed across the individual nodes for processing.
  • the nodes may work together to complete the task, with each node handling a portion of the computation.
  • the results of each node’s computation may be combined to produce a final result.
  • CPU clusters are especially useful for tasks that can be parallelized and executed simultaneously. This allows CPU clusters to complete tasks much faster than a single, high-performance computer. Additionally, because CPU clusters are made up of multiple nodes, they are often more reliable and less prone to failure than a single high-performance component.
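As a small illustration of the divide-distribute-combine pattern described for CPU clusters, the example below splits one task into chunks, processes them in parallel worker processes standing in for cluster nodes, and combines the partial results; it is a toy, not the cluster software of this publication.

```python
# Toy divide/distribute/combine example; worker processes stand in for cluster nodes.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    return sum(chunk)                                   # each "node" handles a portion

def cluster_sum(data, nodes=4):
    chunks = [data[i::nodes] for i in range(nodes)]     # divide the task into smaller tasks
    with ProcessPoolExecutor(max_workers=nodes) as pool:
        partials = pool.map(partial_sum, chunks)        # distribute across the nodes
    return sum(partials)                                # combine into a final result

if __name__ == "__main__":
    print(cluster_sum(list(range(1_000_000))))          # 499999500000
```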
  • the first and second SOC 102, 104 may include various system components, resources, and custom circuitry for managing sensor data, analog-to-digital conversions, wireless data transmissions, and for performing other specialized operations, such as decoding data packets and processing encoded audio and video signals for rendering in a web browser.
  • the system components and resources 124 of the first SOC 102 may include power amplifiers, voltage regulators, oscillators, phase-locked loops, peripheral bridges, data controllers, memory controllers, system controllers, Access ports, timers, and other similar components used to support the processors and software clients running on a computing device.
  • the system components and resources 124 may also include circuitry to interface with peripheral devices, such as cameras, electronic displays, wireless communication devices, external memory chips, etc.
  • the first and/or second SOCs 102, 104 may further include an input/output module (not illustrated) for communicating with resources external to the SOC, such as the clock 106, the voltage regulator 108, the wireless transceiver 166 (e.g., cellular wireless transceiver, Bluetooth transceiver, etc. ) , the user input devices 170 (e.g., a touch-sensitive display, a touch pad, a mouse, etc. ) .
  • Resources external to the SOC (e.g., clock 106, voltage regulator 108, wireless transceiver 166) may be shared by two or more of the internal SOC processors/cores.
  • the software responsible for intelligent link selection and data management may be implemented in the applications processor (AP) 116.
  • the AP 116 may analyze link information and communicate with the other devices to make informed decisions about which network connection or link to use.
  • the AP 116 may also control the signaling (e.g., through the modem, etc. ) , which may include directing data traffic through the selected link and managing the flow of data through one or more paths.
  • the AP 116 in each device may determine its bandwidth requirements, select the network connection or link that best supports the application, and route the corresponding data traffic through the neighbor device that includes the selected network connection or link.
  • Each application has access to the most suitable network resources for its specific needs, leading to enhanced overall performance and user experience.
  • various embodiments may be implemented in various computing systems, including a single processor, multiple processors, multicore processors, or any combination thereof.
  • FIG. 1B illustrates an example network 151 that includes many user computing devices 103, 105, 107, 109, 111, 113, 115 that connect to the internet 121 via different network connections or links 131, 133, 135, 137.
  • the illustrated network 151 includes a media player 103, laptop 105, tablet 107, smartphone 109, smartwatch 111, smart speaker 113, personal computer 115, any or all of which may include the SIP 100 discussed above.
  • the media player 103 includes a network connection or link 131 to the internet 121 via a private network 123 (e.g., provided by TV services, etc. ) .
  • the media player 103, laptop 105, tablet 107, smartphone 109, smartwatch 111, and smart speaker 113 include network connections or links 133 to the internet 121 through Wi-Fi and an access point 117.
  • the personal computer 115 includes a network connection or link 135 to the internet 121 through an IP/Ethernet 119 component.
  • the smartphone 109 and smartwatch 111 include a network connection or link 137 to the internet 121 through cellular connections and a base station 141.
  • Each user computing device 103, 105, 107, 109, 111, 113, 115 may include its own network requirements and distinct network connections or links.
  • each user computing device 103, 105, 107, 109, 111, 113, 115 may include a different collection of software applications that each have their own application-specific requirements (e.g., QoS requirements, etc. ) , characteristics, and/or metrics.
  • any of the user computing devices 103, 105, 107, 109, 111, 113, 115 may include direct communication links to any or all of the other user computing devices 103, 105, 107, 109, 111, 113, 115.
  • the user computing devices 103, 105, 107, 109, 111, 113, 115 may be configured to support various wired and wireless protocols for detecting and relaying information between devices, including but not limited to universal serial bus (USB) , IP/Ethernet, Bluetooth, Bluetooth Low Energy (BLE) , Wi-Fi, telecommunications (e.g., LTE, 5G, 6G, etc. ) , device-to-device (D2D) links, etc.
  • USB universal serial bus
  • telecommunications e.g., LTE, 5G, 6G, etc.
  • D2D device-to-device
  • the user computing devices 103, 105, 107, 109, 111, 113, 115 may be configured to share their network connection or link characteristics (e.g., network connection or link quality, bandwidth, latency, etc. ) with the other user computing devices 103, 105, 107, 109, 111, 113, 115.
  • the user computing devices 103, 105, 107, 109, 111, 113, 115 may be configured to process signals from the other user computing devices 103, 105, 107, 109, 111, 113, 115 to setup a neighbor link and/or manage the data traffic as directed.
  • the user computing devices 103, 105, 107, 109, 111, 113, 115 may be configured to handle data offloading, which may include receiving and/or transmitting data on behalf of another user computing device 103, 105, 107, 109, 111, 113, 115.
  • the user computing devices 103, 105, 107, 109, 111, 113, 115 may be configured to send, receive, implement, enforce, and/or use offloading rules and/or custom routing rules that are specific to a software application.
  • the custom routing rules may be used to determine how incoming and outgoing packets for the application should be managed.
  • the offloading rules may include information that may be used by the device to receive and manage data traffic when a neighbor device decides to offload data through the device’s network connection or link.
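The offloading and custom routing rules described in the last few bullets can be pictured as per-application entries consulted on each outgoing packet. The sketch below is a hypothetical representation; the rule fields, application names, and link identifiers are assumptions for illustration.

```python
# Hypothetical application-specific routing/offloading rules; the rule shape,
# application names, and link identifiers are illustrative assumptions.
ROUTING_RULES = {
    "video_call_app": {"preferred_link": "neighbor_cellular_5g", "max_latency_ms": 50},
    "backup_app":     {"preferred_link": "neighbor_wifi", "background_only": True},
}

def route_outgoing_packet(app_name, packet, links, default_link):
    rule = ROUTING_RULES.get(app_name)
    if rule and rule["preferred_link"] in links:
        return links[rule["preferred_link"]].send(packet)   # offload via a neighbor link
    return default_link.send(packet)                        # otherwise keep the own link
```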
  • FIGs. 2A and 2B illustrate components that could be included in a system configured to perform intelligent network connection or link selection operations in accordance with some embodiments.
  • a system may include a user device 201 and a neighbor device 203, any or all of which may be user computing devices 103, 105, 107, 109, 111, 113, 115 and/or include the SIP 100 discussed above.
  • the user device 201 may include time-sensitive applications 202, background high bandwidth (High-BW) applications 204, interactive high bandwidth applications 206, a network context information 210 component, an application classification model 212 component, a network connection or link arbitrator 214 component, a neighboring link setup 216 component, data offload switcher 218 component, a Bluetooth/Wi-Fi protocol stack 220, a Bluetooth/Wi-Fi device driver 222, a TCP/IP protocol stack 224, a network card driver 226, and a socket application programming interface (API) 230 component.
  • the neighbor device 203 may share connectivity characteristics with the user device 201 and act as a potential source of alternative network connection or links.
  • the components 210-218 may be configured to select the best available network connection or link for data traffic based on the link’s performance characteristics and the specific requirements of each of the applications 202-206.
  • the components 210-218 may be configured to share connectivity characteristics with neighboring devices 203. That is, neighboring devices 203 with different internet connectivity may share their connectivity characteristics (e.g., latency, bandwidth, etc. ) with the user device 201 and each other.
  • the components 210-218 may be configured to arbitrate and select the best link so as to eliminate the need for a centralized hub device.
  • the components 210-218 may be configured to enable routing through neighbor devices 203. That is, the components 210-218 determine and select the best link based on the characteristics of the network connection or link and the characteristics of the device associated with the network connection or link. Unlike conventional network systems in which each device connects directly to a centralized hub, the components 210-218 may use direct communication links to route communications through a neighbor device 203.
  • the network context information 210 component may include an advertiser component and a scanner component.
  • the advertiser component may be configured to broadcast its network context information (e.g., connectivity quality and type, etc. ) .
  • the scanner component may be configured to actively scan and collect network context information from the neighboring devices 203.
  • the network context information 210 component may be configured to use the Bluetooth/Wi-Fi protocol stack 220 and the Bluetooth/Wi-Fi device driver 222 to scan for neighbor devices in proximity to the user computing device, receive and store network context information of the neighbor devices, and/or determine neighbor device network context information.
  • the network context information may include, but is not limited to, network performance data, information characterizing the requirements and/or performance metrics of the applications 202-206, detailed information about the types of network connections or links that are available (e.g., low latency links, high bandwidth links, test links, etc. ) , and connectivity information identifying the performance characteristics of the available network connections or links, the performance characteristics of the different types of network connections or links, a list of the supported types of network connections or links, the available or detected types of network connections or links, link identifiers (link IDs) , setup parameters for each network connection or link type, network connection or link qualities, performance parameters of neighboring devices, hardware specifications of neighboring devices, software version information, security protocols in use, service set identifiers (SSID) of connected networks, internet protocol (IP) addresses, media access control (MAC) addresses, signal strength information, connection status, and any other relevant network configuration details that may influence the selection or enhancement of network connections.
  • IP internet protocol
  • MAC media access control
  • the application classification model 212 may include or use an artificial intelligence (AI) model that is configured to perform application classification operations, which may include determining the application type of each of the applications 202-206 and matching the best network connection or link based on the application type.
  • AI artificial intelligence
  • the user device 201 may be configured to use a lookup table to perform basic matching.
  • the user device 201 may be configured to use an AI model component for more sophisticated and dynamic matching based on various parameters and evolving conditions, allowing it to dynamically adapt to changing network environments and application requirements.
  • the application classification model 212 may include or use multiple parameters or information elements (IEs) , such as a running status IE, a target network type IE, and bandwidth and latency requirements IE.
  • the running status IE may identify whether the application is running in the foreground or background.
  • the target network type IE may include information suitable for determining whether the application’s target network is public or private (which may influence the choice of network connection or link based on security, speed, reliability, etc. ) .
  • the bandwidth and latency requirement IE may include information for analyzing the bandwidth and latency needs of the application to match it with a suitable network connection or link.
  • the application classification model 212 may be configured to receive and use input information to generate output that identifies an application type that may be used for network connection or link selection.
  • the input information may include an application history network package IE, application package IE, and application process status IE.
  • the application history network packages IE may include information for analyzing historical network usage patterns to understand typical demands.
  • the application package information may include various details (e.g., package name, permissions, functions, etc. ) that may be used to characterize the application and its network usage.
  • the application process status information may include the current status of the application process and/or its immediate network needs.
  • the output information may identify an application type and/or other information suitable for determining the most suitable network connection or link for the application given its specific requirements and usage patterns.
  • the application classification model 212 may be configured to analyze the current operations of applications 202-206, determine their QOS metrics (e.g., using historical data, metadata and attributes associated with the software package, etc. ) , and determine each application’s specific network requirements based on the determined metrics. For example, the application classification model 212 may analyze various parameters such as network bandwidth usage, latency requirements, data sensitivity, and priority levels of the applications. In some embodiments, the analysis may include analyzing real-time data traffic patterns, historical performance metrics, and the intrinsic characteristics of the applications (determined based on metadata and attributes associated with the software package, etc. ) .
  • the application classification model 212 may be configured to categorize applications into different classes based on their network requirements and behaviors. Examples of such classes include, but are not limited to, high-bandwidth and low-latency applications (e.g., video streaming services, etc. ) , moderate-bandwidth applications (e.g., web browsing, etc. ) , and low-bandwidth high-latency-tolerant applications (e.g., email, text messaging, etc. ) .
  • the application classification model 212 may be configured to predict future network requirements of the applications 202-206 based on usage patterns and adjust the network configuration proactively to maintain or improve performance.
  • the proactive network configuration adjustments may include dynamically allocating bandwidth, prioritizing network traffic, and suggesting alternative network connections or links to enhance the overall user experience and/or the efficiency of network resource utilization.
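Tying together the information elements and application classes described above, the sketch below assembles a feature record from the (assumed) history, package, and process-status inputs and maps it to one of the application classes named earlier; a trained AI model would normally perform this mapping, and the simple rules here are only illustrative stand-ins.

```python
# Illustrative only: build features from the described information elements and map
# them to an application class; a trained AI model would replace the toy rules below.
def build_features(history_packages, package_info, process_status):
    return {
        "avg_bandwidth_mbps": history_packages.get("avg_bandwidth_mbps", 0.0),
        "latency_sensitive": history_packages.get("latency_sensitive", False),
        "permissions": package_info.get("permissions", []),
        "running_status": process_status.get("running_status", "background"),
        "target_network": process_status.get("target_network", "public"),
    }

def classify_application(features):
    if features["latency_sensitive"]:
        return "time_sensitive"                               # e.g. video calls, gaming
    if features["avg_bandwidth_mbps"] > 25:
        if features["running_status"] == "foreground":
            return "interactive_high_bandwidth"               # e.g. video streaming
        return "background_high_bandwidth"                    # e.g. large downloads
    return "low_bandwidth_latency_tolerant"                   # e.g. email, messaging
```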
  • the network connection or link arbitrator 214 component may be configured to receive and use the network context information to select the best network connection or link for a given application 202-206.
  • the network connection or link arbitrator 214 component may receive as input information about all available network connections or links and the current type of the application in use, analyze the available network connection or links, and select the network connection or link that best aligns with the application’s requirements (e.g., considering factors such as bandwidth, latency, reliability, etc. ) .
  • the input to the network connection or link arbitrator 214 component may include, but is not limited to, the determined application type (e.g., high-bandwidth, moderate-bandwidth, low-bandwidth, low-latency, high-latency-tolerant, etc. ) and network context information for the available network connections or links.
  • the network connection or link arbitrator 214 may be configured to use any or all of such input to analyze and/or select a network connection or link.
  • the network connection or link arbitrator 214 component may be configured to receive network context information, determine the available links, filter the available links based on whether the associated neighbor device is available (and includes available resources, etc. ) , based on whether the links meet the requirements (e.g., bandwidth, latency, etc., ) of the application, and/or based on whether the links include a data traffic quota or may accommodate the anticipated volume of data traffic.
  • the network connection or link arbitrator 214 component may be configured to generate as output the link identifier value (link ID) of the selected link and send the generated output to the neighboring link setup 216 component.
  • the neighboring link setup 216 component and the data offload switcher 218 component may be configured to receive and use such information to establish and manage the selected data link (which may be identified via the link ID) .
  • the neighboring link setup 216 component may be configured to establish the network connection or link selected by the network connection or link arbitrator 214.
  • the data offload switcher 218 component may be configured to manage the transition from the current data link of the application 202-206 to the newly selected network connection or link.
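  • A minimal sketch (in Python, with hypothetical data structures) of the arbitration step described above: candidate links are filtered by neighbor availability, application requirements, and remaining data quota, and the surviving link’s identifier is what would be handed to the neighboring link setup 216 and data offload switcher 218 components.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CandidateLink:
    link_id: str
    neighbor_available: bool
    bandwidth_mbps: float
    latency_ms: float
    quota_mb_remaining: Optional[float]  # None means no quota/cap applies

def arbitrate(links, required_bw_mbps, max_latency_ms, expected_mb):
    """Filter candidate links as described above, then pick the lowest-latency survivor."""
    usable = [
        l for l in links
        if l.neighbor_available
        and l.bandwidth_mbps >= required_bw_mbps
        and l.latency_ms <= max_latency_ms
        and (l.quota_mb_remaining is None or l.quota_mb_remaining >= expected_mb)
    ]
    if not usable:
        return None
    return min(usable, key=lambda l: l.latency_ms).link_id

links = [
    CandidateLink("wifi-local", True, 40.0, 30.0, None),
    CandidateLink("neighbor-5g", True, 250.0, 20.0, 5000.0),
]
# The returned link ID would then be passed to the neighboring link setup component,
# and the data offload switcher would migrate the application's traffic once setup succeeds.
print(arbitrate(links, required_bw_mbps=50.0, max_latency_ms=50.0, expected_mb=1200.0))
```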
  • the network context information 210 component may scan for neighbor devices 203 in proximity to the user computing device 201, receive and store network context information of the neighbor devices, and/or determine neighbor device network context information. That is, the user computing device 201 may become aware of the network contexts of its neighbor devices 203 in operation 1.
  • the user 250 may launch a software application, such as the illustrated time-sensitive application 202.
  • the application classification model 212 may detect currently executing application operations, determine the application is a time-sensitive application 202, and determine the network QoS requirements of the time-sensitive application 202.
  • the network connection or link arbitrator 214 component may receive all available network connections or links, including the local network connections or links of the user computing device 201 and the network connections or links of the neighbor device 203, from the network context information 210 component.
  • the network connection or link arbitrator 214 component may receive the application type information from the application classification model 212.
  • the network connection or link arbitrator 214 component may determine and select the best link and send information identifying the selected network connection or link to the neighboring link setup 216 component.
  • the neighboring link setup 216 component may set up the selected link and send a “neighbor link setup success” message to the data offload switcher 218 component.
  • the data offload switcher 218 component may update the route information of the user computing device 201 for the time-sensitive application 202.
  • the time-sensitive application 202 may use the selected network connection or link to transport data.
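  • The sequence of operations above can be summarized as a single driver routine; the sketch below (Python, with hypothetical component interfaces) only shows the ordering of calls, not any real platform API.

```python
def handle_app_launch(app, context_store, classifier, arbitrator, link_setup, offload_switcher):
    """Hypothetical driver tying the components together in the order described above.

    All of the collaborators are placeholders for the network context information,
    application classification, arbitration, link setup, and offload switching
    components; none of their interfaces are defined by this disclosure.
    """
    links = context_store.available_links()       # local links plus neighbor device links
    app_type = classifier.classify(app)           # e.g., "high-bandwidth/low-latency"
    link_id = arbitrator.select(links, app_type)  # best-matching link, or None
    if link_id is not None and link_setup.establish(link_id):
        offload_switcher.update_route(app, link_id)  # after a "neighbor link setup success"
    # The application then transports its data over whichever link is now routed.
```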
  • FIG. 3 is a process flow diagram illustrating a method 300 for intelligently selecting a network connection or link in accordance with some embodiments.
  • the method 300 may be performed in a computing device by at least one processor encompassing one or more processors (e.g., 110, 112, 114, 116, 118, 121, 122, 152, etc.) , components (e.g., 210-218, etc.) or subsystems discussed in this application.
  • Means for performing the functions of the operations in the method 300 may include at least one processor including one or more of processors 110, 112, 114, 116, 118, 121, 122, 152, and other components (e.g., 210-218, etc.) described herein. Further, one or more processors of the at least one processor may be configured with software or firmware to perform some or all of the operations of the method 300. In order to encompass the alternative configurations enabled in various embodiments, the hardware implementing any or all of the method 300 is referred to herein as “at least one processor. ”
  • the at least one processor may scan a local network to receive network context information from one or more neighbor devices that include network connections or links to a wide area network (WAN) .
  • the processor may use the Bluetooth/Wi-Fi protocol stack 220 to actively search for nearby devices connected to different types of WAN links (e.g., cellular, fiber optic, or satellite connections, etc. ) .
  • the processor may collect, aggregate, and use various different types of data (e.g., link type, bandwidth, latency, signal strength, current load, etc.) to generate or update the network context information so that it is suitable for use in assessing the availability and suitability of the WAN links for various applications running on the user device.
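  • As an illustration of the aggregation step described above, the sketch below (with hypothetical field names) merges per-neighbor link reports into a network context table and ages out stale entries; actual discovery would run over the Bluetooth/Wi-Fi protocol stack rather than being passed in directly.

```python
import time

def update_network_context(context, discovered):
    """Merge scan results into the network context table.

    `discovered` is assumed to be a list of dicts reported by nearby devices,
    each describing one WAN link (link type, bandwidth, latency, signal
    strength, current load).
    """
    now = time.time()
    for entry in discovered:
        key = (entry["device_id"], entry["link_type"])
        context[key] = {**entry, "last_seen": now}
    # Drop entries for neighbors that have not been seen recently.
    stale = [k for k, v in context.items() if now - v["last_seen"] > 60]
    for k in stale:
        del context[k]
    return context

context = {}
update_network_context(context, [
    {"device_id": "phone-1", "link_type": "5G", "bandwidth_mbps": 300,
     "latency_ms": 25, "signal_dbm": -85, "load_pct": 10},
])
```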
  • the at least one processor may use an AI model to classify a software application detected on the user computing device based on one or more network condition parameters (e.g., QoS requirements, etc. ) associated with the detected software application.
  • the processor may use the AI model to assess a video conferencing application and classify it as a high-bandwidth, low-latency application due to its real-time communication nature and high data throughput needs.
  • the processor may use the AI model to categorize an email client as a low-bandwidth, high-latency-tolerant application due to its minimal real-time data transfer requirements.
  • This classification may allow the system to prioritize network resources and optimize link selection according to the specific demands of each application, ensuring that applications with more critical or sensitive network needs, such as video streaming or online gaming, receive priority in network connection or link allocation over less demanding applications, like file downloads or background updates.
  • the at least one processor may analyze or evaluate a device network connection or link and at least one neighbor network connection or link of the one or more neighbor devices. For example, the processor may assess the bandwidth capacity, latency characteristics, reliability, and current usage levels of the network connection or link of the user device (e.g., a home Wi-Fi connection, etc.) and the network connections or links of nearby devices (e.g., a neighboring device’s cellular data connection, etc.) . The processor may compare these factors to determine which network connection or link is best suited for the current needs of the software applications operating on the user device. If the user device is engaged in a high-bandwidth activity (e.g., streaming 4K video, etc.) , the processor may opt for the faster and less congested cellular link.
  • the processor may be configured to determine and select the most efficient and effective network connection or link available.
  • the processor may be configured to evaluate the immediate performance of the network connections or links and also consider future workloads, stability, potential future changes in network conditions, and other conditions and communication parameters.
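  • One simple way to compare the device link against neighbor links, shown below as an illustrative Python sketch, is a weighted composite score over bandwidth, latency, and current load; the weights and normalization constants are placeholders, not values taken from this disclosure, and could be tuned per application class or predicted future workload.

```python
def score_link(link, weights=(0.5, 0.3, 0.2)):
    """Hypothetical composite score: higher is better."""
    w_bw, w_lat, w_load = weights
    bw = min(link["bandwidth_mbps"] / 100.0, 1.0)      # saturate at 100 Mbps
    lat = max(0.0, 1.0 - link["latency_ms"] / 200.0)   # 0 ms -> 1.0, 200+ ms -> 0.0
    load = 1.0 - link["load_pct"] / 100.0              # lightly loaded links score higher
    return w_bw * bw + w_lat * lat + w_load * load

local_wifi = {"bandwidth_mbps": 40, "latency_ms": 30, "load_pct": 70}
neighbor_5g = {"bandwidth_mbps": 250, "latency_ms": 20, "load_pct": 15}
best = max([("local Wi-Fi", local_wifi), ("neighbor 5G", neighbor_5g)],
           key=lambda kv: score_link(kv[1]))
print(best[0])
```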
  • the at least one processor may select a network connection or link for the detected software application based on an output of the AI model and a result of evaluating the device network connection or link and at least one neighbor network connection or link of the one or more neighbor devices.
  • the at least one processor may determine whether there is an operational advantage to offloading data to a link of the neighbor device and select the network connection or link in response to determining that there is an operational advantage to offloading the data to a link of the neighbor device.
  • the at least one processor may select multiple network connections or links in block 308.
  • the processor may divide the data transfer across multiple network connections or links for a data-intensive application (e.g., a cloud backup, a cloud-based gaming service, high-definition video streaming, etc. ) .
  • the processor may simultaneously use the primary Wi-Fi network of the user device and an underutilized Ethernet connection of a neighbor device.
  • the simultaneous use of multiple links may significantly enhance data transfer speeds and efficiency.
  • the processor may be configured to select the network connection or link based on the needs (e.g., bandwidth needs, latency sensitivity, reliability requirements, etc. ) of its currently executing applications.
  • the processor may be configured to facilitate application-specific network selection. For example, a laptop running a time-sensitive application may select the most suitable network connection or link based on a result of analyzing the network connections or links available from its neighboring devices and the specific requirements of the time-sensitive application.
  • the laptop may select an ultra-reliable low latency network provided by the smartphone (or the phone’s low latency network slice, etc. ) for the time-sensitive application and/or use the smartphone as a hotspot.
  • the processor may allocate different types of data traffic to different links and/or different neighbor devices based on the characteristics of the data traffic, links and/or devices. For example, the processor may send real-time gaming traffic over the lower latency Ethernet link and use the Wi-Fi connection for background downloads or updates.
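  • A minimal sketch of this kind of traffic-to-link allocation is shown below; the traffic classes, link names, and latency figures are hypothetical and only illustrate sending latency-sensitive flows over the fastest available link while routing background traffic elsewhere.

```python
def allocate_traffic(flows, links):
    """Assign each traffic class to the candidate link that best matches it.

    `flows` maps a traffic class to whether it is latency sensitive; `links`
    maps a link name to its measured latency in milliseconds. Both inputs are
    illustrative placeholders.
    """
    by_latency = sorted(links, key=links.get)            # link names, fastest first
    allocation = {}
    for flow, latency_sensitive in flows.items():
        allocation[flow] = by_latency[0] if latency_sensitive else by_latency[-1]
    return allocation

print(allocate_traffic(
    {"real-time gaming": True, "background updates": False},
    {"neighbor Ethernet": 5, "local Wi-Fi": 30},
))
```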
  • the at least one processor may establish a data transfer link to a neighbor device that includes the selected network connection or link.
  • the data transfer link may be a USB link, an Ethernet link, a short-range wireless technology link (e.g., a Bluetooth link, etc.) , a wireless personal area network connection or link (e.g., a BLE link, etc.) , a Wi-Fi link, a telecommunications link (e.g., 5G link, etc.) , a mission-critical/ultra-low latency (MCX) link, and/or a device-to-device (D2D) link.
  • the processor may establish a D2D link that uses the advanced capabilities (e.g., high data rates, low latency, etc. ) of 5G technologies in response to selecting a high-speed, low-latency telecommunications link offered by the neighbor device.
  • This type of connection would be particularly advantageous for applications requiring real-time data transfer, such as augmented reality (AR) or virtual reality (VR) applications.
  • the processor may establish a Wi-Fi link with a neighbor device and use the increased bandwidth available through Wi-Fi for large data transfers.
  • the at least one processor may use the selected network connection or link and the established data transfer link to the neighbor device to communicate data associated with the detected software application. For example, the processor may offload a data processing or communication task from a primary network connection of the user computing device to a network connection of the neighbor device. As a further example, the processor may offload the high-definition video streaming data to a high-speed telecommunications link on a neighbor device due to its superior bandwidth and lower latency, while concurrently using the local Wi-Fi connection for less critical data tasks such as background updates or synchronizations.
  • the processor may update routing information of the user computing device to direct data traffic through the selected network connection or link.
  • the processor may be configured to dynamically route traffic so that the device bypasses its direct network connection or link in favor of a more optimal route through other neighboring devices. For example, instead of a laptop connecting to the internet directly through its base station, it may route its traffic through the TV, which may in turn connect to the internet through a private network. This indirect routing may be based on the assessment that the TV’s network connection or link offers a better fit for the laptop’s application needs at that moment.
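  • The sketch below illustrates the idea of a per-application route table with a hypothetical in-memory structure; an actual implementation would update the device’s platform-specific routing or policy-routing machinery rather than a Python object.

```python
class AppRouteTable:
    """Minimal per-application route table, standing in for the device's real
    routing machinery (which is platform specific)."""

    def __init__(self, default_link):
        self.default_link = default_link
        self.routes = {}

    def set_route(self, app_id, link_id):
        # Direct all traffic for this application through the selected link.
        self.routes[app_id] = link_id

    def link_for(self, app_id):
        return self.routes.get(app_id, self.default_link)

table = AppRouteTable(default_link="laptop-wifi")
table.set_route("time_sensitive_app", "tv-private-network")   # indirect route via the TV
print(table.link_for("time_sensitive_app"), table.link_for("email_client"))
```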
  • the processor may be configured to use multiple network connections or links simultaneously and/or to combine the bandwidth of multiple network connections or links.
  • for example, a laptop with access to multiple network connections or links (e.g., Wi-Fi, Ethernet, etc.) may simultaneously use the Wi-Fi type of network connection of a tablet and the cellular network type of network connection of a smartphone to ensure robust and high-speed internet connectivity for a specific software application (e.g., a document download application, etc.) .
  • the at least one processor may simultaneously use multiple selected network connections or links to communicate the data associated with the detected software application.
  • the processor may use a first type of network connection (e.g., Wi-Fi, etc. ) of a first neighbor device and a second type of network connection (e.g., telecommunications, etc. ) of a second neighbor device.
  • the processor may be configured to simultaneously use multiple different network connections (i.e., communication links) by dividing data traffic into different streams that may be routed through different network connections and paths to the internet.
  • the processor may be configured to allow for application-specific network stream selection. For example, a first application on the device may use a single stream for its data transfer and a second application on the device may use two or more streams for its data transfer. Each application may tailor its network usage based on its specific requirements.
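  • The sketch below illustrates dividing one application’s payload into chunks assigned round-robin to two selected links (e.g., a tablet’s Wi-Fi and a smartphone’s cellular connection); sequencing and reassembly at the receiver, which a real multi-stream transport would require, are omitted, and the link names are hypothetical.

```python
def split_into_streams(payload: bytes, links, chunk_size=64 * 1024):
    """Divide an application's data into chunks and assign them to the selected links."""
    chunks = [payload[i:i + chunk_size] for i in range(0, len(payload), chunk_size)]
    assignment = {link: [] for link in links}
    for idx, chunk in enumerate(chunks):
        assignment[links[idx % len(links)]].append(chunk)   # round-robin over links
    return assignment

plan = split_into_streams(b"x" * 300_000, ["tablet-wifi", "smartphone-cellular"])
print({link: len(chunks) for link, chunks in plan.items()})
```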
  • a laptop computer 400 may include at least one processor 402 coupled to volatile memory 404 and a large capacity nonvolatile memory, such as a disk drive 406 or Flash memory.
  • the laptop computer 400 may include a touchpad touch surface 408 that serves as the computer’s pointing device, and thus may receive drag, scroll, and flick gestures.
  • the laptop computer 400 may have one or more antennas 410 for sending and receiving electromagnetic radiation that may be connected to a wireless data link and/or cellular telephone transceiver 412 coupled to the at least one processor 402.
  • the computer 400 may also include a BT transceiver 414, a compact disc (CD) drive 416, a keyboard 418, and a display 420 all coupled to the at least one processor 402.
  • Other configurations of the computing device may include a computer mouse or trackball coupled to the at least one processor (e.g., via a universal serial bus (USB) input) as are well known, which may also be used in conjunction with various embodiments.
  • FIG. 5 is a component block diagram of a computing device 500 suitable for use with various embodiments.
  • various embodiments may be implemented on a variety of computing devices 500, an example of which is illustrated in FIG. 5 in the form of a smartphone.
  • the computing device 500 may include a first SOC 102 coupled to a second SOC 104.
  • the first and second SoCs 102, 104 may be coupled to internal memory 516, a touch-sensitive display 512, and a speaker 514.
  • the first and second SOCs 102, 104 may also be coupled to at least one subscriber identity module (SIM) 540 and/or a SIM interface that may store information supporting a first 5GNR subscription and a second 5GNR subscription, which support service on a 5G non-standalone (NSA) network.
  • the computing device 500 may include an antenna 504 for sending and receiving electromagnetic radiation that may be connected to a wireless transceiver 166 coupled to one or more processors in the first and/or second SOCs 102, 104.
  • the computing device 500 may also include menu selection buttons or rocker switches 520 for receiving user inputs.
  • the computing device 500 also includes a sound encoding/decoding (CODEC) circuit 510, which digitizes sound received from a microphone into data packets suitable for wireless transmission and decodes received sound data packets to generate analog signals that are provided to the speaker to generate sound.
  • one or more of the processors in the first and second circuitries 102, 104, wireless transceiver 166 and CODEC 510 may include a digital signal processor (DSP) circuit (not shown separately) .
  • a server device 600 may include a processor 601 coupled to volatile memory 602 and a large capacity nonvolatile memory, such as a disk drive 603.
  • the server device 600 may also include a floppy disc drive, a USB drive, etc. coupled to the processor 601.
  • the server device 600 may also include network access ports 606 coupled to the processor 601 for establishing data connections with a network connection circuit 604 and a communication network 608 (e.g., an Internet protocol (IP) network) coupled to other communication system network elements.
  • the processors or processing units discussed in this application may be any programmable microprocessor, microcomputer, or multiple processor chip or chips that can be configured by software instructions (applications) to perform a variety of functions, including the functions of various embodiments described.
  • multiple processors may be provided, such as one processor within a first circuitry dedicated to wireless communication functions and one processor within a second circuitry dedicated to running other applications.
  • Software applications may be stored in the memory before they are accessed and loaded into the processor.
  • the processors may include internal memory sufficient to store the application software instructions.
  • Implementation examples are described in the following paragraphs. While some of the following implementation examples are described in terms of example methods, further example implementations may include: the example methods discussed in the following paragraphs implemented by a computing device including at least one processor coupled to memory and configured (e.g., with processor-executable instructions) to perform operations of the methods of the following implementation examples; the example methods discussed in the following paragraphs implemented by a computing device including means for performing functions of the methods of the following implementation examples; and the example methods discussed in the following paragraphs may be implemented as a non-transitory processor-readable storage medium having stored thereon processor-executable instructions configured to cause a processor of a computing device to perform the operations of the methods of the following implementation examples.
  • Example 1 A method of communicating information by a user computing device, the method including scanning a local network to receive network context information from one or more neighbor devices that include network connections or links to a wide area network (WAN) , using an artificial intelligence (AI) model to classify a software application detected on the user computing device based on one or more network condition parameters associated with the detected software application, analyzing a device network connection and at least one neighbor network connection of the one or more neighbor devices, selecting a network connection for the detected software application based on an output of the AI model and a result of analyzing the device network connection and the at least one neighbor network connection of the one or more neighbor devices, establishing a data transfer link to a neighbor device that includes the selected network connection, and using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application.
  • Example 2 The method of example 1, in which using the AI model to classify the software application detected on the user computing device based on the one or more network condition parameters associated with the detected software application includes using the AI model to classify the software application detected on the user computing device based on a quality of service (QOS) parameter of the detected software application.
  • Example 3 The method of either of examples 1 or 2, in which establishing the data transfer link to the neighbor device that includes the selected network connection includes one or more of establishing a universal serial bus (USB) link to the neighbor device, establishing an Ethernet link to the neighbor device, establishing a short-range wireless technology link to the neighbor device, establishing a wireless personal area network connection to the neighbor device, establishing a Wi-Fi link to the neighbor device, establishing a telecommunications link to the neighbor device, or establishing a device-to-device (D2D) link to the neighbor device.
  • Example 4 The method of any of examples 1-3, in which using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application includes using a mission-critical/ultra-low latency (MCX) link and a tethering link to the neighbor device to communicate the data associated with the detected software application.
  • Example 5 The method of any of examples 1-4, in which using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application includes offloading a data processing or communication task from a primary network connection of the user computing device to a network connection of the neighbor device.
  • Example 6 The method of any of examples 1-5, in which using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application includes updating routing information of the user computing device to direct data traffic through the selected network connection.
  • Example 7 The method of any of examples 1-6, in which selecting the network connection for the detected software application based on the output of the AI model and the result of analyzing the device network connection and the at least one neighbor network connection of the one or more neighbor devices includes selecting the network connection in response to determining that there is an operational advantage to offloading data to a network connection of a neighboring device.
  • Example 8 The method of any of examples 1-7, in which selecting the network connection for the detected software application includes selecting multiple network connections, and using the selected network connection and the established data transfer link to the neighbor device to communicate data associated with the detected software application includes simultaneously using the selected multiple network connections to communicate the data associated with the detected software application.
  • Example 9 The method of example 8, in which simultaneously using the selected multiple network connections to communicate the data associated with the detected software application includes using a first type of network connection of a first neighbor device and a second type of network connection of a second neighbor device, in which the first type of network connection and the second type of network connection are different types of network connections.
  • Example 10 The method of example 9, in which the first type of network connection is a Wi-Fi network, and the second type of network connection is a telecommunications network.
  • As used in this application, terminology such as “component, ” “module, ” “system, ” etc., is intended to encompass a computer-related entity. These entities may involve, among other possibilities, hardware, firmware, a blend of hardware and software, software alone, or software in an operational state.
  • a component may encompass a running process on a processor, the processor itself, an object, an executable file, a thread of execution, a program, or a computing device.
  • an application operating on a computing device and the computing device itself may be designated as a component.
  • a component might be situated within a single process or thread of execution or could be distributed across multiple processors or cores.
  • these components may operate based on various non-volatile computer-readable media that store diverse instructions and/or data structures. Communication between components may take place through local or remote processes, function or procedure calls, electronic signaling, data packet exchanges, and memory interactions, among other known methods of network, computer, processor, or process-related communications.
  • Such memory technologies/types may include non-volatile random-access memories (NVRAM) such as magnetoresistive RAM (M-RAM) , resistive random-access memory (ReRAM or RRAM) , phase-change random-access memory (PC-RAM) , ferroelectric RAM (F-RAM) , spin-transfer torque magnetoresistive random-access memory (STT-MRAM) , and three-dimensional cross point (3D-XPOINT) memory.
  • Such memory technologies/types may also include non-volatile or read-only memory (ROM) technologies, such as programmable read-only memory (PROM) , field programmable read-only memory (FPROM) , one-time programmable non-volatile memory (OTP NVM) .
  • Such memory technologies/types may further include volatile random-access memory (RAM) technologies, such as dynamic random-access memory (DRAM) , double data rate (DDR) synchronous dynamic random-access memory (DDR SDRAM) , static random-access memory (SRAM) , and pseudostatic random-access memory (PSRAM) .
  • Each of the above-mentioned memory technologies includes, for example, elements suitable for storing instructions, programs, control signals, and/or data for use in a computing device, system on chip (SOC) or another electronic component.
  • Any references to terminology and/or technical details related to an individual type of memory, interface, standard or memory technology are for illustrative purposes only, and not intended to limit the scope of the claims to a particular memory system or technology unless specifically recited in the claim language.
  • The various illustrative logics, logical blocks, modules, and circuits described in connection with the embodiments disclosed herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP) , an application specific integrated circuit (ASIC) , a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein.
  • a general-purpose processor may be a microprocessor, but, in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Alternatively, some operations or methods may be performed by circuitry that is specific to a given function.
  • the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable medium or non-transitory processor-readable medium.
  • the operations of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a non-transitory computer-readable or processor-readable storage medium.
  • Non-transitory computer-readable or processor-readable storage media may be any storage media that may be accessed by a computer or a processor.
  • non-transitory computer-readable or processor-readable media may include RAM, ROM, EEPROM, FLASH memory, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that may be used to store target program code in the form of instructions or data structures and that may be accessed by a computer.
  • Disk and disc, as used herein, include compact disc (CD) , laser disc, optical disc, DVD, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of non-transitory computer-readable and processor-readable media.
  • the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Various embodiments include systems and methods of using intelligent network connection selection to improve the performance and operation of a communication network and its components. A user computing device may scan (e.g., in operation (1)) for neighbor devices in proximity to determine neighbor device network context information, receive and store network context information from the neighbor devices, detect the launch of an application on the computing device, use an artificial intelligence (AI) model to classify the detected application based on its network or service requirements, analyze the available network connections of both the computing device and the neighbor devices, select a network connection or link for the detected application based on the analyzed network connections or links and the application's network or service requirements, establish a data connection via the selected network connection (which may be via a neighbor device), and transfer application data over the established data connection.
PCT/CN2023/139114 2023-12-15 2023-12-15 Extension de liaison de données d'application et sélection intelligente Pending WO2025123323A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/139114 WO2025123323A1 (fr) 2023-12-15 2023-12-15 Extension de liaison de données d'application et sélection intelligente

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2023/139114 WO2025123323A1 (fr) 2023-12-15 2023-12-15 Extension de liaison de données d'application et sélection intelligente

Publications (1)

Publication Number Publication Date
WO2025123323A1 true WO2025123323A1 (fr) 2025-06-19

Family

ID=89509143

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/139114 Pending WO2025123323A1 (fr) 2023-12-15 2023-12-15 Extension de liaison de données d'application et sélection intelligente

Country Status (1)

Country Link
WO (1) WO2025123323A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190132219A1 (en) * 2017-10-26 2019-05-02 Microsoft Technology Licensing, Llc Intelligent connection management for multiple interfaces
WO2022237721A1 (fr) * 2021-05-14 2022-11-17 华为技术有限公司 Procédé de réseautage et appareil et système appropriés

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190132219A1 (en) * 2017-10-26 2019-05-02 Microsoft Technology Licensing, Llc Intelligent connection management for multiple interfaces
WO2022237721A1 (fr) * 2021-05-14 2022-11-17 华为技术有限公司 Procédé de réseautage et appareil et système appropriés
US20240121840A1 (en) * 2021-05-14 2024-04-11 Huawei Techologies Co., Ltd. Network connection method, related apparatus, and system

Similar Documents

Publication Publication Date Title
US12488240B2 (en) Multi-domain joint semantic frame parsing
JP7653419B2 (ja) チャットボットシステムにおける無関係な発話の検出
US10803392B1 (en) Deploying machine learning-based models
JP2022511613A (ja) 量子コンピューティング・ジョブの共同スケジューリング
WO2018102240A1 (fr) Modèle conjoint de compréhension de langage et de gestion de dialogues
WO2025112801A1 (fr) Procédé d'entraînement de modèle d'apprentissage profond et système d'entraînement de modèle d'apprentissage profond
WO2022171066A1 (fr) Procédé et appareil d'attribution de tâche sur la base d'un dispositif de l'internet des objets, ainsi que procédé et appareil d'apprentissage de réseau
CN111368973B (zh) 用于训练超网络的方法和装置
CN111340220B (zh) 用于训练预测模型的方法和装置
CN117313830A (zh) 基于知识蒸馏的模型训练方法、装置、设备及介质
US20190074013A1 (en) Method, device and system to facilitate communication between voice assistants
CN118673334B (zh) 一种训练样本的生成方法、装置、电子设备及存储介质
Zhang et al. Mobile generative ai: Opportunities and challenges
Sham et al. Intelligent admission control manager for fog‐integrated cloud: A hybrid machine learning approach
Liu et al. LAMeTA: Intent-Aware Agentic Network Optimization via a Large AI Model-Empowered Two-Stage Approach
WO2025123323A1 (fr) Extension de liaison de données d'application et sélection intelligente
Sudharsan et al. Globe2train: A framework for distributed ml model training using iot devices across the globe
US20220011852A1 (en) Methods and apparatus to align network traffic to improve power consumption
US20250094792A1 (en) Task execution method for large model, device, and medium
KR20220064338A (ko) 가속기 연산 스케줄링의 경량화 및 병렬화 방법 및 장치
Niu et al. Collaborative Learning of On-Device Small Model and Cloud-Based Large Model: Advances and Future Directions
Wang et al. EIS: Edge Information-Aware Scheduler for Containerized IoT Applications
Bengre et al. A learning-based scheduler for high volume processing in data warehouse using graph neural networks
CN117348986A (zh) 任务处理方法、问答方法以及任务处理平台
WO2025145413A1 (fr) Systèmes, appareils, procédés et dispositifs de stockage non transitoires lisibles par ordinateur pour améliorer et distribuer des modèles de génération de jeux de données synthétiques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23837138

Country of ref document: EP

Kind code of ref document: A1