
WO2025227423A1 - Enhanced network function interaction in telecommunications networks - Google Patents

Enhanced network function interaction in telecommunications networks

Info

Publication number
WO2025227423A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing entity
entity
request
specific
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/091088
Other languages
English (en)
Inventor
Feng Liu
Meimei Wang
Mang Li
Jiarui PAN
Kaihan HU
Yafei Li
Konstantinos Vandikas
Mengmeng Liu
Canqin JIAN
Xiaobo Wang
Jiawei Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Priority to PCT/CN2024/091088 priority Critical patent/WO2025227423A1/fr
Publication of WO2025227423A1 publication Critical patent/WO2025227423A1/fr
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/16 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence

Definitions

  • the present disclosure relates to telecommunications networks.
  • the present disclosure relates to various aspects of how to respond to requests made to one or more Network Functions (NFs) of a telecommunications network.
  • 5G core network includes a plurality of interconnected so-called Network Functions (NFs) that each provide one or more services accessible by other NFs of the network.
  • the Network Data Analytics Function (NWDAF), described in e.g. ETSI TS 123.288 V16.4.0, is one example of a contemporary NF that is API-driven, in which specific APIs are provided to handle specific requests, and wherein e.g. new or updated APIs are required to handle requests that have not already been specified and standardized.
  • the present disclosure provides various computer-implemented methods related to identifying one or more responses to a request in a telecommunications network, as well as various corresponding NF or other entities, a telecommunications network, computer programs and computer program products as defined in and by the accompanying independent claims.
  • Various embodiments of the methods, NF or other entities, telecommunications network, computer programs and computer program products are defined in and by the accompanying dependent claims.
  • a computer-implemented method of identifying one or more responses to a request is performed as part of at least one NF of a telecommunications network.
  • the method includes obtaining a request originating from a requesting NF of the telecommunications network.
  • the method includes obtaining a plurality of candidate responses.
  • the method includes calculating, by performing regression based on the obtained request and candidate responses, elements of a coefficient vector subject to a constraint on the elements.
  • the method further includes identifying, out of the plurality of candidate responses, one or more responses pertinent to the request based on one or more of the calculated elements of the coefficient vector.
  • the present disclosure improves upon contemporary technology in that it enables at least one NF to receive and respond to different types of requests in a more flexible way.
  • the envisaged method enables the support of more versatile requests/queries without the need to change/update multiple interfaces, instead maintaining only a single API.
  • the use of regression with the elements of the coefficient vector subjected to a constraint allows the method, as will be described in more detail later herein, to overcome the problem of deciding how many candidate responses should be returned/used to formulate an answer; this decision may be difficult, as the optimal number may often change based on the type of request/query.
  • the number of returned/used candidate responses is inferred instead of being explicitly stated.
  • the regression may include regularization.
  • regularization in the context of regression means that a penalty term is added to e.g. a cost function of a regression model in order to reduce the magnitude of the elements of the coefficient vector, i.e. the magnitudes of the coefficients. This may prevent the model from fitting too closely to e.g. training data and may reduce the variance of the model, which may improve the method's capability of returning/using the candidate responses that are most relevant to the received request/query.
  • the regularization may include adding (to e.g. a cost function of the regression model) the elements of the coefficient vector as an L1-norm penalty term. Using such a penalty term may cause the regression model to select a subset of all possible candidate responses by setting the rest of the elements of the coefficient vector to zero, in contrast to e.g. L2-norm regularization wherein all elements are often shrunk towards zero but still remain non-zero.
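The L1-versus-L2 contrast described above can be made concrete with the textbook closed-form shrinkage rules that apply per coefficient under an orthonormal design. This is an illustrative sketch only; the function names and numeric values below are assumptions, not part of the disclosure.

```python
import numpy as np

# Under an orthonormal design, L1 (lasso) regularization applies
# soft-thresholding per coefficient, while L2 (ridge) applies uniform scaling.
def l1_shrink(b, lam):
    # Soft-thresholding: coefficients with magnitude below lam become exactly zero.
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def l2_shrink(b, lam):
    # Ridge: every coefficient is pulled toward zero but stays non-zero.
    return b / (1.0 + lam)

b = np.array([3.0, 0.4, -0.2, 1.5])
sparse = l1_shrink(b, 0.5)   # the two small coefficients are zeroed out
dense = l2_shrink(b, 0.5)    # all four coefficients shrink but remain non-zero
```

This mirrors the selection behaviour described above: the L1 penalty performs variable selection, while the L2 penalty only shrinks.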
  • the regression may include Least Absolute Shrinkage and Selection Operator (LASSO) regression, which may provide L1-norm regularization for a linear regression model and result in variable selection in which only a subset of all possible candidate responses are considered pertinent to the request, thereby resulting in an improved response that is more likely to be considered useful and relevant. Phrased differently, such regression may remove candidate responses that are redundant or irrelevant to the request without incurring much or any loss of information.
  • performing the regression may include using an Alternating Direction Method of Multipliers (ADMM) , or any derivative thereof.
  • ADMM may be useful when solving a convex optimization problem, as it may enable such a problem to be broken down into smaller pieces/subproblems, each of which may be easier to handle/solve than the full problem.
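As a non-normative sketch of how ADMM can be applied to a LASSO problem of the kind discussed here, the numpy code below alternates the three standard ADMM updates. The penalty parameter rho, the regularization weight lam, the iteration count and the synthetic data are all illustrative assumptions.

```python
import numpy as np

def lasso_admm(X, y, lam=0.5, rho=5.0, iters=500):
    # ADMM for: minimize (1/2)||X b - y||^2 + lam * ||z||_1  subject to  b = z.
    p = X.shape[1]
    z = np.zeros(p)
    u = np.zeros(p)                      # scaled dual variable
    A = X.T @ X + rho * np.eye(p)        # b-update system matrix (fixed)
    Xty = X.T @ y
    for _ in range(iters):
        b = np.linalg.solve(A, Xty + rho * (z - u))                      # b-update
        z = np.sign(b + u) * np.maximum(np.abs(b + u) - lam / rho, 0.0)  # z-update
        u = u + b - z                                                    # dual update
    return z

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
true_b = np.zeros(10)
true_b[[1, 4]] = [2.0, -1.5]
y = X @ true_b + 0.01 * rng.standard_normal(50)
b_hat = lasso_admm(X, y)
```

The soft-thresholding in the z-update is what produces exact zeros, i.e. the sparsity later used to select pertinent candidates.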
  • the request may include at least one query feature vector corresponding to an embedding of query data.
  • the candidate responses may include candidate feature vectors corresponding to embeddings of respective candidate data.
  • the regression may be performed using the at least one query feature vector as dependent variable and the candidate feature vectors as independent variables.
  • the query feature vector may be provided using a same type of embedding as used to provide the candidate feature vectors.
  • the candidate feature vectors may for example be provided by embedding different chunks of text, such as text of a knowledge base, and the query feature vector may be provided for example by embedding a (natural) language query provided by a consumer of the NF.
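To illustrate the point that the query and the candidates should share one embedding space, here is a deliberately crude stand-in for a real embedding model (a hashed bag-of-words; the function and dimension are assumptions, not the disclosure's model), applied identically to KB chunks and to a query:

```python
import hashlib

import numpy as np

def embed(text, dim=512):
    # Hashed bag-of-words: each word increments one of `dim` buckets; the
    # result is L2-normalized so dot products act as cosine similarities.
    v = np.zeros(dim)
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dim
        v[bucket] += 1.0
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

chunks = ["the NWDAF exposes analytics services",
          "vector databases store embeddings",
          "5G core network functions use APIs"]
X = np.stack([embed(c) for c in chunks], axis=1)  # candidate vectors as columns
y = embed("which databases store embeddings")     # query in the same space
sims = X.T @ y                                    # one similarity per chunk
```

Because the same `embed` function produced both sides, the dot products are directly comparable across chunks.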
  • the constraint may require all of the elements of the coefficient vector to be non-negative. This recasts the similar vector searching problem as a non-negative, sparse reconstruction of the query feature vector.
  • finding the one or more responses pertinent to the request may be performed by solving a LASSO (-type) regression problem with the additional constraint that the variable to be solved for (i.e. the coefficient vector) should be non-negative; the positive entries of the coefficient vector may then indicate which candidate responses (i.e. candidate feature vectors) should be considered pertinent to the request (i.e. the query feature vector).
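A small numpy sketch of this non-negative LASSO selection step, using projected coordinate descent (one standard way to handle a non-negativity constraint; the solver choice, lam and the synthetic data are assumptions):

```python
import numpy as np

def nonneg_lasso(X, y, lam=0.05, sweeps=200):
    # Projected coordinate descent for
    #   minimize (1/2)||y - X b||^2 + lam * sum(b)   subject to  b >= 0.
    # Clamping each update at zero enforces the non-negativity constraint.
    p = X.shape[1]
    beta = np.zeros(p)
    col_sq = np.sum(X ** 2, axis=0)
    for _ in range(sweeps):
        for j in range(p):
            # Residual with candidate j's current contribution added back.
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = max(0.0, (X[:, j] @ r_j - lam) / col_sq[j])
    return beta

rng = np.random.default_rng(1)
candidates = rng.standard_normal((64, 6))
candidates /= np.linalg.norm(candidates, axis=0)         # unit-norm columns
query = 0.7 * candidates[:, 2] + 0.3 * candidates[:, 5]  # mix of two candidates
beta = nonneg_lasso(candidates, query)
pertinent = np.flatnonzero(beta > 1e-6)
```

Note that the number of selected candidates is not specified anywhere; it falls out of the optimization.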
  • different subsets of the candidate responses may be stored in different storages accessible by (different) computing entities of the telecommunications network, and the regression may include using distributed optimization. As will be described in more detail later herein, this allows pertinent responses to be found from a plurality of different sources and across different domains, without the need to share the actual knowledge base data across such sources and/or domains, such as between different entities of the telecommunications network.
  • the distributed optimization may include: -separating an optimization problem of the regression into a plurality of subproblems each individually solvable by a respective computing entity; -each computing entity finding, by solving its respective subproblem, computing entity-specific elements of computing entity-specific coefficient vectors; and finding the elements of the coefficient vector by gathering the computing entity-specific elements of the respective computing entity-specific coefficient vector found by each computing entity.
  • each computing entity may for example be provided by a computing node, and finding the elements of the “overall” coefficient vector may include gathering the computing entity-specific elements of the computing entity-specific coefficient vectors found by each computing entity solving its respective computing entity-specific subproblem.
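The coordinator/computing-entity split above can be simulated schematically. The disclosure envisages ADMM-style subproblems with slack parameters; for brevity this sketch substitutes a simpler block-coordinate message flow (an assumption) that still preserves the key property: raw candidate data never leaves an entity, only coefficient updates and a shared residual-like parameter are exchanged.

```python
import numpy as np

class ComputingEntity:
    """Holds a private subset of candidate feature vectors (as columns)."""
    def __init__(self, local_X, lam):
        self.X = local_X
        self.lam = lam
        self.beta = np.zeros(local_X.shape[1])

    def solve_subproblem(self, target, sweeps=20):
        # Non-negative lasso over this entity's own columns only; `target`
        # is the parameter value received from the coordinating entity.
        for _ in range(sweeps):
            for j in range(self.X.shape[1]):
                r_j = target - self.X @ self.beta + self.X[:, j] * self.beta[j]
                num = self.X[:, j] @ r_j - self.lam
                self.beta[j] = max(0.0, num / (self.X[:, j] @ self.X[:, j]))
        return self.beta

rng = np.random.default_rng(2)
X = rng.standard_normal((64, 8))
X /= np.linalg.norm(X, axis=0)
y = 0.9 * X[:, 1] + 0.5 * X[:, 6]        # query built from candidates 1 and 6

# Candidates split across two entities; raw columns are never pooled.
entities = [ComputingEntity(X[:, :4], lam=0.05),
            ComputingEntity(X[:, 4:], lam=0.05)]

for _ in range(10):                      # coordination rounds
    for e in entities:
        others = sum(o.X @ o.beta for o in entities if o is not e)
        e.solve_subproblem(y - others)   # coordinator distributes the parameter

# Coordinator gathers entity-specific elements into one coefficient vector.
gathered = np.concatenate([e.beta for e in entities])
pertinent = np.flatnonzero(gathered > 1e-6)
```

Each entity only ever receives an aggregated residual-like vector, consistent with the privacy motivation discussed later in the text.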
  • each computing entity solving its respective subproblem may include the use of a computing entity-specific scaled dual variable.
  • the distributed optimization may further include using a coordinating entity for calculating one or more parameter values required by the (other) computing entities to solve their individual subproblems, and for distributing the one or more parameter values to the respective computing entity.
  • there may thus be multiple categories of entities, including e.g. a coordinating entity and a plurality of computing entities, such as for example provided by a coordinating node and a plurality of computing nodes, respectively.
  • calculating the one or more parameter values using the coordinating entity may include the coordinating entity solving a non-constrained optimization problem.
  • the one or more parameter values calculated by the coordinating entity may include one or more computing entity-specific slack parameter values.
  • the NF may be configured to provide a Large Language Model (LLM) -driven question and answer (QA) service.
  • the request may be defined based on a natural language query, and the plurality of candidate responses may be defined based on a plurality of chunks of natural language text.
  • the LLM-driven QA service may be based on Retrieval Augmented Generation (RAG) .
  • the NF may be configured to provide a classification service.
  • the request may be defined based on an entity to be predicted, and the plurality of candidate responses may be defined based on a plurality of candidate entities.
  • the NF may be configured to provide a log analytics service.
  • the request may be defined based on a log to be predicted, and the plurality of candidate responses may be defined based on a plurality of historical logs.
  • the plurality of candidate responses may be stored using one or more vector databases.
  • the plurality of candidate responses may be provided as a plurality of candidate feature vectors embedding respective candidate data (such as different chunks of text) , and may be stored in the one or more vector databases.
  • Vector databases may be particularly suitable for storing such vectors.
  • the method may further include detecting that the request is an outlier to the plurality of candidate responses and, in response thereto, returning an indication that a response to the request is uncertain or not known. For example, based on the elements of the coefficient vector, it may be determined that there are no candidate responses that are particularly pertinent to the request. Instead of (or in addition to) then providing one or more less- or non-pertinent responses as an answer to the request, the response may include an indication that whatever answer is provided is likely not a very good answer, and/or the answer itself may be that the NF “does not know” a good answer to the question/query of the request.
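A minimal sketch of this abstention behaviour, assuming a simple threshold criterion on the solved coefficient vector (the threshold and response format are illustrative, not specified by the disclosure):

```python
import numpy as np

def answer_or_abstain(beta, candidates, threshold=1e-3):
    # If no coefficient is meaningfully positive, no candidate reconstructs
    # the query well, so abstain instead of returning weak matches.
    idx = np.flatnonzero(beta > threshold)
    if idx.size == 0:
        return {"status": "uncertain", "responses": []}
    return {"status": "ok", "responses": [candidates[j] for j in idx]}

candidates = ["chunk A", "chunk B", "chunk C"]
good = answer_or_abstain(np.array([0.0, 0.6, 0.0]), candidates)
none = answer_or_abstain(np.array([0.0, 0.0, 0.0]), candidates)
```

Returning an explicit "uncertain" status lets the consumer distinguish a weak answer from no answer at all.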
  • the method may further include returning a response to the obtained request to the requesting NF based on at least one of the identified one or more responses pertinent to the request.
  • the NF may be or form part of a Network Data Analytics Function, NWDAF or NWDAF framework.
  • the various entities described herein, e.g. what has here been referred to as a coordinating entity, a computing entity, and similar, may all form part of (and/or be used to implement) such an NWDAF or NWDAF framework.
  • each storage and/or computing entity may implement at least part of an Analytical Data Repository Function (ADRF) .
  • the coordinating entity may implement at least part of a Data Collection Co-ordination (& Delivery) Function (DCCF) .
  • a computer-implemented method of assisting in identifying one or more responses to a request is performed as part of a coordinating entity of a telecommunications network.
  • the method includes: -obtaining a request originating from a requesting NF of the telecommunications network; -for each of a plurality of computing entities of the telecommunications network each having access to different subsets of a plurality of candidate responses, and configured to solve individual regression subproblems to find computing entity-specific elements of computing entity-specific coefficient vectors: -calculating one or more parameter values required by the respective computing entity to solve its computing entity-specific individual subproblem; -gathering the computing entity-specific elements of the respective computing entity-specific coefficient vector found by the respective computing entity; and identifying, out of the plurality of candidate responses, one or more responses pertinent to the request based on the gathered computing entity-specific elements (as gathered from the plurality of computing entities) .
  • Calculating the one or more parameter values required by the computing entities may of course also include distributing the calculated one or more parameter values to the computing entities.
  • the computing entity-specific (regression) subproblems may include the constraint that the computing entity-specific elements are non-negative.
  • calculating the one or more parameter values required by the computing entities may include the coordinating entity solving a non-constrained optimization problem.
  • the one or more parameter values required by the computing entities may include one or more computing entity-specific slack parameter values.
  • a computer-implemented method of assisting in identifying one or more responses to a request is performed by a computing entity of a telecommunications network.
  • the method includes: -obtaining, from a coordinating entity of the telecommunications network, one or more parameter values required to solve a regression subproblem specific to the computing entity, wherein the one or more parameter values are determined (e.g. calculated) by the coordinating entity based on a request originating from a requesting NF of the telecommunications network; -finding, by solving the regression subproblem specific to the computing entity based on the obtained one or more parameter values and a subset of a plurality of candidate responses accessible by the computing entity, computing entity-specific elements of a computing entity-specific coefficient vector subject to a constraint on the computing entity-specific elements; and providing the found computing entity-specific elements of the computing entity-specific coefficient vector to the coordinating entity.
  • the constraint may include that the computing entity-specific elements are non-negative.
  • solving the regression subproblem specific to the computing entity may include the use of a computing entity-specific scaled dual variable.
  • the one or more parameter values for each computing entity may include a computing entity-specific slack parameter value (obtained from the coordinating entity), and the method may include using the computing entity-specific slack parameter value as part of solving the computing entity-specific regression subproblem.
  • a Network Function (NF) entity for a telecommunications network.
  • the NF entity includes processing circuitry and a memory storing instructions, wherein the instructions are such that they, when executed by the processing circuitry, cause the NF entity to: -obtain a request originating from a requesting NF of the telecommunications network; -obtain a plurality of candidate responses; -calculate, by performing regression based on the obtained request and candidate responses, elements of a coefficient vector subject to a constraint on the elements; and identify, out of the plurality of candidate responses, one or more responses pertinent to the request based on one or more of the calculated elements of the coefficient vector.
  • the NF entity may thus be configured to perform the method of the first aspect (or any embodiment thereof disclosed/discussed herein) .
  • a coordinating entity for a telecommunications network includes processing circuitry and a memory storing instructions, wherein the instructions are such that they, when executed by the processing circuitry, cause the coordinating entity to: -obtain a request originating from a requesting NF of the telecommunications network; -for each of a plurality of computing entities of the telecommunications network each having access to different subsets of a plurality of candidate responses, and configured to solve individual regression subproblems to find computing entity-specific elements of computing entity-specific coefficient vectors: -calculate one or more parameter values required by the respective computing entity to solve its computing entity-specific individual subproblem; distribute the one or more calculated parameter values to the respective computing entity, and -gather the computing entity-specific elements of the respective computing entity-specific coefficient vector found by the respective computing entity; and identify, out of the plurality of candidate responses, one or more responses pertinent to the request based on the gathered computing entity-specific elements (as gathered from the plurality of computing entities).
  • a computing entity for a telecommunications network.
  • the computing entity includes processing circuitry and a memory storing instructions, wherein the instructions are such that they, when executed by the processing circuitry, cause the computing entity to: -obtain, from a coordinating entity of the telecommunications network, one or more parameter values required by the computing entity to solve a regression subproblem specific to the computing entity, wherein the one or more parameter values are determined (e.g. calculated) by the coordinating entity based on a request originating from a requesting NF of the telecommunications network; -find, by solving the regression subproblem specific to the computing entity based on the obtained one or more parameter values and a subset of a plurality of candidate responses accessible by the computing entity, computing entity-specific elements of a computing entity-specific coefficient vector subject to a constraint on the computing entity-specific elements; and provide the found computing entity-specific elements of the computing entity-specific coefficient vector to the coordinating entity.
  • the computing entity may thus be configured to perform the method of the third aspect (or any embodiment thereof disclosed/discussed herein) .
  • the telecommunications network includes i) the Network Function (NF) entity of the fourth aspect, and/or ii) the coordinating entity of the fifth aspect and a plurality of the computing entity of the sixth aspect.
  • a computer program including instructions that, when executed by processing circuitry of a Network Function (NF) entity, cause the NF entity to: -obtain a request originating from a requesting NF of the telecommunications network; -obtain a plurality of candidate responses; -calculate, by performing regression based on the obtained request and candidate responses, elements of a coefficient vector subject to a constraint on the elements; and identify, out of the plurality of candidate responses, one or more responses pertinent to the request based on one or more of the calculated elements of the coefficient vector.
  • the computer program is thus configured to cause the NF entity to perform the method of the first aspect (or any embodiment thereof disclosed/discussed herein) .
  • a computer program including instructions that, when executed by processing circuitry of a coordinating entity, cause the coordinating entity to: -obtain a request originating from a requesting NF of the telecommunications network; -for each of a plurality of computing entities of the telecommunications network each having access to different subsets of a plurality of candidate responses, and configured to solve individual regression subproblems to find computing entity-specific elements of computing entity-specific coefficient vectors: -calculate one or more parameter values required by the respective computing entity to solve its computing entity-specific individual subproblem; distribute the one or more calculated parameter values to the respective computing entity, and -gather the computing entity-specific elements of the respective computing entity-specific coefficient vector found by the respective computing entity; and identify, out of the plurality of candidate responses, one or more responses pertinent to the request based on the gathered computing entity-specific elements (as gathered from the plurality of computing entities) .
  • the computer program is thus configured to cause the coordinating entity to perform the method of the second aspect (or any embodiment thereof disclosed/discussed herein).
  • a computer program including instructions that, when executed by processing circuitry of a computing entity, cause the computing entity to: -obtain, from a coordinating entity of the telecommunications network, one or more parameter values required by the computing entity to solve a regression subproblem specific to the computing entity, wherein the one or more parameter values are determined (e.g. calculated) by the coordinating entity based on a request originating from a requesting NF of the telecommunications network; -find, by solving the regression subproblem specific to the computing entity, computing entity-specific elements of a computing entity-specific coefficient vector; and provide the found computing entity-specific elements to the coordinating entity.
  • the computer program is thus configured to cause the computing entity to perform the method of the third aspect (or any embodiment thereof disclosed/discussed herein) .
  • a computer program product including a computer-readable storage medium storing the computer program of the eighth aspect.
  • a computer program product including a computer-readable storage medium storing the computer program of the ninth aspect.
  • a computer program product including a computer-readable storage medium storing the computer program of the tenth aspect.
  • Figure 1 schematically illustrates various examples of identifying one or more responses to a request to an NF according to the present disclosure
  • Figure 2 schematically illustrates various examples of identifying one or more responses to a request to an NF in a distributed setup according to the present disclosure
  • Figure 3 schematically illustrates a flowchart of various examples of a method of identifying one or more responses to a request to an NF according to the present disclosure
  • Figure 4 schematically illustrates a flowchart of various examples of a method of assisting in identifying one or more responses to a request to an NF as performed in a coordinating entity according to the present disclosure
  • Figure 5 schematically illustrates a flowchart of various examples of a method of assisting in identifying one or more responses to a request to an NF as performed in a computing entity according to the present disclosure
  • Figure 6 schematically illustrates a signaling scheme of various example communications between a coordinating entity and one or more computing entities according to the present disclosure
  • FIGS. 7A and 7B schematically illustrate example NF entities according to the present disclosure
  • FIGS. 8A and 8B schematically illustrate example coordinating entities according to the present disclosure
  • FIGS. 9A and 9B schematically illustrate example computing entities according to the present disclosure
  • Figure 10 schematically illustrates example computer program products, computer programs and computer-readable storage media according to the present disclosure
  • Figure 11 schematically illustrates various example telecommunications networks according to the present disclosure
  • FIGS. 12A, 12B, 12C and 12D schematically illustrate various example use cases according to the present disclosure.
  • Figure 13 schematically illustrates an example NWDAF or NWDAF framework according to the present disclosure.
  • an NF may be implemented using a single (network) entity/device, or be implemented using a plurality of (network) entities/devices.
  • an “entity” may be a physical entity or a logical entity.
  • a physical entity may be a physical computer, server, or similar, while a logical entity may correspond to e.g. an instance of a virtual machine running on a server, or e.g. to an entity implemented in software only where multiple such entities are then realized on same physical hardware.
  • an NF may be implemented by a single such entity, or formed/implemented by multiple such entities.
  • text found in a so-called knowledge base is divided into multiple text chunks, where a text chunk may for example contain a limited number of words (such as a sentence, paragraph, or similar) .
  • a text chunk may be limited to e.g. 1000 words or similar.
  • An embedding model is then used to convert each text chunk into a feature vector such that semantic meaning may be captured in an embedded vector space.
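The chunking step described above can be sketched as a simple word-count splitter (real pipelines often also overlap adjacent chunks; the 1000-word limit follows the example in the text):

```python
def chunk_text(text, max_words=1000):
    # Split a knowledge-base document into consecutive chunks of at most
    # `max_words` words each (the last chunk may be shorter).
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

kb_text = " ".join(f"word{i}" for i in range(2500))
chunks = chunk_text(kb_text)   # 2500 words -> chunks of 1000, 1000 and 500 words
```

Each resulting chunk is then what the embedding model converts into one candidate feature vector.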
  • the same embedding model can also be used to embed the query as a feature vector, and the query feature vector is then compared with the feature vectors formed based on the KB in order to identify whether there are KB feature vectors that are similar to the query feature vector.
  • After one or more KB feature vectors are identified as similar to the query feature vector (so-called similar vector searching), these one or more KB feature vectors are then used to form a response (i.e. answer) to the request (query) from the NF consumer.
  • the functionality of similar vector searching may be implemented and managed by dedicated tools such as vector databases (or vector stores) , where contemporary examples include Chroma DB and Facebook AI Similarity Search (FAISS) , and others.
  • Searching for similar vectors may include returning a list of database objects (i.e. KB feature vectors) that are nearest to the query feature vector in terms of e.g. Euclidean distance, and/or returning a list of database objects that have a largest dot product with the query feature vector.
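Both ranking criteria mentioned above (nearest in Euclidean distance, largest dot product) can be sketched directly in numpy over a toy set of KB feature vectors:

```python
import numpy as np

def top_k_euclidean(kb, q, k):
    # Indices of the k KB vectors (rows of `kb`) nearest to q in L2 distance.
    return np.argsort(np.linalg.norm(kb - q, axis=1))[:k]

def top_k_dot(kb, q, k):
    # Indices of the k KB vectors with the largest dot product with q.
    return np.argsort(-(kb @ q))[:k]

kb = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.9, 0.1]])
q = np.array([1.0, 0.0])
```

Vector databases such as FAISS implement exactly these rankings, but with index structures that avoid the brute-force scan shown here.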
  • a challenge arises as to how to efficiently perform such searching on a large scale, including on e.g. billions of different KB feature vectors.
  • Similar vector searching may be formulated as a k Nearest Neighbor (kNN) problem, wherein the task is to find the k nearest neighbors among the KB feature vectors to the query feature vector, in some vector space to which the various feature vectors belong.
  • the number of feature vectors that is returned (i.e. the size of the integer k) is often fixed, which may give rise to multiple problems. For example, if k is selected too small, there is a risk that not all text chunks of the KB that contain information pertinent to the query will be returned. As a result, the information provided to the LLM to answer the query may be incomplete, and the overall answer quality is thus likely reduced.
  • the value of k may also have further implications, for example on the indexing system of the database, as the latter will try to optimize how e.g. documents or other data items are indexed such that the first k results are retrieved quickly. If the value of k changes, the underlying index may thus also need to be updated, which may consume additional computational resources, as re-indexing of the system may not be trivial.
  • the use of a fixed value for k may lead to sub-optimal performance, and in particular where the incoming queries cannot be expected to always be similar.
  • a predefined k may also be insufficient in case there are multiple KBs that are distributed across multiple domains/storages, wherein e.g. each KB contains different datasets and requires its own optimal value of k.
  • distributed KBs may be external to e.g. a processor responsible for handling a query, and for privacy reasons it may be the case that k needs to be determined without knowing e.g. the proper context of each KB, which may further exacerbate the situation described above.
  • the present disclosure proposes a scalable, adaptive similar vector searching methodology which does not rely on explicitly estimating/stating k, but which instead relies on inference and solving of a convex optimization problem. This allows, for a specific query, the number k of similar vectors to be determined adaptively. As will also be discussed in more detail later herein, the proposed solution of the present disclosure also addresses the problem where (private) KBs are located in different places, and wherein exchange of sensitive data across different domains is not desirable.
  • Figure 1 schematically illustrates examples of an NF 110 (entity) in a first scenario 100 as envisaged herein
  • Figure 3 schematically illustrates a flowchart of examples of a method 300 performed by such an NF 110 in order to identify one or more responses to a request.
  • the NF 110 receives (as part of e.g. an operation S310 of the method 300) a request 120 originating from a requesting NF 122 of a telecommunications network, wherein the requesting NF 122 can be different from the NF 110.
  • the request 120 may for example include a natural language query, or similar.
  • the request 120 may be embedded (using some suitable embedding model) to form a query feature vector y.
  • the NF 110 further obtains (as part of e.g. an operation S320 of the method 300) a plurality of candidate responses 130, for example from one or more KBs 132.
  • the candidate responses 130 may be embedded (using some suitable embedding model, such as the same model used to obtain the query feature vector y) to form a set/plurality of candidate feature vectors ⁇ x j ⁇ , where j is an integer denoting the j: th such candidate response/feature vector.
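As a purely illustrative sketch of the two embedding steps above (the disclosure does not prescribe any particular embedding model; the embed () function below is a toy stand-in and all names are assumptions):

```python
# Toy character-frequency "embedding" purely for illustration; in practice any
# suitable embedding model would be used, and the same model must be applied to
# both the query and the candidate text chunks so the vectors are comparable.
def embed(text):
    dims = "abcdefghijklmnopqrstuvwxyz"
    t = text.lower()
    return [t.count(c) / max(1, len(t)) for c in dims]

query = "how to configure the gNB"
chunks = ["gNB configuration steps", "billing information", "handover procedure"]

y = embed(query)              # query feature vector y
X = [embed(c) for c in chunks]  # candidate feature vectors {x j}
```

The vectors y and X are then the dependent and independent variables, respectively, of the regression described below.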
• the NF 110 is configured to identify which of the plurality of candidate responses 130 are pertinent (i.e. relevant) to the request 120 by performing regression, i.e. by solving a regression problem 140 based on the obtained request 120 and the candidate responses 130.
• this includes calculating elements of a coefficient vector β subject to a constraint on the elements β′ j of β.
• the regression problem may include minimizing a function f (β) subject to some constraint on β.
  • the regression problem may be a convex optimization problem.
• the regression problem 140 solved by the NF 110 may be stated as a LASSO (Least Absolute Shrinkage and Selection Operator) type problem, e.g. to minimize (1/2) ‖y − Xβ‖² + λ ‖β‖₁ over the coefficient vector β, where the columns of X are the candidate feature vectors {x j} and λ > 0 is a regularization parameter.
• the constraint on the elements of the coefficient vector β may include that all elements are non-negative, i.e. that β ≥ 0, such that the regression problem to be solved by the NF 110 is to minimize (1/2) ‖y − Xβ‖² + λ ‖β‖₁ subject to β ≥ 0.
• the positive entries β′ j of β may thus correspond to the candidate feature vectors {x j} (i.e. candidate responses) that are to be identified as similar/pertinent to the query feature vector y (i.e. the request) .
• an ordinary LASSO-type problem does not include the constraint that β ≥ 0; this (or such a) constraint is proposed in the present disclosure.
• the present disclosure thus proposes to perform similar vector searching based on a non-negative sparse reconstruction of the query feature vector y, wherein both variable selection and regularization are performed in order to identify only the candidate feature vectors that are pertinent to the query. This is useful, as the KB 132 may often include much additional information that is not relevant for a particular query.
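A minimal sketch of such a non-negative sparse reconstruction, assuming a standard cyclic coordinate descent solver (the disclosure does not prescribe a particular solver; the function name, the regularization weight lam and the toy data are illustrative assumptions):

```python
def nn_lasso(y, X, lam=0.1, iters=200):
    """Solve min 0.5*||y - sum_j beta_j*x_j||^2 + lam*sum_j beta_j, beta_j >= 0,
    by cyclic coordinate descent with projection onto the non-negative orthant."""
    n, d = len(X), len(y)
    beta = [0.0] * n
    norms = [sum(v * v for v in x) for x in X]
    for _ in range(iters):
        for j in range(n):
            # residual with candidate j's current contribution removed
            r = [y[k] - sum(beta[m] * X[m][k] for m in range(n) if m != j)
                 for k in range(d)]
            g = sum(X[j][k] * r[k] for k in range(d))
            # soft-threshold, then project: negative coefficients are clipped to 0
            beta[j] = max(0.0, (g - lam) / norms[j]) if norms[j] > 0 else 0.0
    return beta

# Toy usage: y coincides with the first candidate, so only beta[0] survives;
# the number of "similar" candidates thus falls out adaptively, without any k.
X = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
y = [1.0, 0.0]
beta = nn_lasso(y, X)
pertinent = [j for j, b in enumerate(beta) if b > 1e-6]
```

The positive entries of beta play the role of the positive coefficient vector elements described above: their indices identify the pertinent candidate responses.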
  • the request 120 may thus in some examples include at least one query feature vector y that corresponds to an embedding (using some suitable embedding model) of query data, such as embedding of a natural language query (such as a question written in natural language) .
  • the candidate responses 130 may include candidate feature vectors ⁇ x j ⁇ corresponding to embeddings of respective candidate data, wherein such candidate data is e.g. text chunks of the information found in the KB 132, such as sentences, paragraphs, single words, or similar.
  • the at least one query feature vector y is thus used as a dependent variable, while the candidate feature vectors ⁇ x j ⁇ are used as independent variables.
  • the NF 110 is further configured to identify (as part of e.g. an operation S340 of the method 300) one or more responses 152 out of the plurality of candidate responses 130.
  • the identified one or more responses 152 may for example be passed downstream and used by other functionality of the NF 110 and/or by some other NF of the telecommunications system.
  • the NF 110 may further return a response /answer 154 (to the obtained request 120) to the requesting NF 122, based on at least one of the identified one or more responses 152 identified as being pertinent to the request 120.
• the NF 110 may be further configured to detect whether the request 120 (as represented by the query feature vector y) is an outlier to the candidate responses (as represented by the plurality of candidate feature vectors {x j} ) , i.e. whether there are no candidate feature vectors {x j} that are considered pertinent to the request/query feature vector y.
  • the NF 110 may be configured to include an indication in the response 154 that the response 154 (i.e. the answer to the query 120 from the requesting NF 122) is uncertain or not known.
• the NF 110 may avoid using and/or returning candidate responses that are not sufficiently close to the query feature vector y, and may thus provide an improved QA functionality in that it may state that it does not know a good answer instead of returning an answer that is not well founded.
  • the prompt may include a statement such as “indicate that you don’t know the answer if the returned list of nearby candidate feature vectors is empty” , or similar, after which the NF 110 may return an indication that “I do not know the answer to the particular question” as part of, or as, the response 154, or similar.
  • outlier detection may be based on the idea that a distribution of outputs from the regression problem is different for inlier query feature vectors and outlier query feature vectors, wherein e.g. the distribution for inlier query feature vectors is more concentrated while the distribution for outlier query feature vectors is flatter.
• contemporary outlier detection algorithms such as e.g. One-Class Support Vector Machine (SVM) , Local Outlier Factor (LOF) , Isolation Forest, and similar may also be used, where the solved coefficient vectors β of different query feature vectors can be used as input to the outlier detection algorithm.
• the output 152 may for example be a list including one or more text chunks deemed similar to the request 120, which may be passed downstream to e.g. LLM-based answer generation or similar. If the query feature vector y (i.e. the request 120) is detected as an outlier, the list may be empty and/or include an indication that a good answer was not found within the KB (s) 132.
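A hedged sketch of this downstream handling (the cutoff threshold and the response fields are assumptions, not taken from the disclosure):

```python
def build_response(beta, chunks, thr=0.05):
    """Map solved coefficients back to KB text chunks; an empty hit list is
    treated as an outlier query, i.e. the answer is reported as not known."""
    hits = sorted(
        ((b, c) for b, c in zip(beta, chunks) if b > thr),
        key=lambda t: -t[0],  # most pertinent chunk first
    )
    if not hits:
        return {"answer_known": False, "chunks": []}
    return {"answer_known": True, "chunks": [c for _, c in hits]}

ok = build_response([0.9, 0.0, 0.02], ["gNB config", "billing", "handover"])
unknown = build_response([0.0, 0.01], ["a", "b"])
```

The answer_known flag corresponds to the indication, discussed above, that the response to the requesting NF is uncertain or not known.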
• Figure 2 schematically illustrates examples of an NF 210 (entity) and multiple additional entities 230-1, 230-2, ..., 230-i, ... (herein also jointly referred to as entities 230) in a distributed scenario 200 as envisaged herein.
  • Figures 4 and 5 schematically illustrate flowcharts of examples of methods 400 and 500 performed by NF 210 and entities 230, respectively as part of assisting in identifying one or more responses to a request.
• the scenario 200 includes multiple KBs 132-1, 132-2, ..., 132-i, ... (herein also jointly referred to as KBs 132) that are each accessible only by a corresponding entity 230, such that each of the entities 230 may obtain its own subset 130-1, 130-2, ..., 130-i, ... (herein also jointly referred to as subsets 130) of a plurality of candidate responses/feature vectors stored distributed among the KBs 132.
  • the NF 210 may be referred to as a coordinating entity, while each of the entities 230 may be referred to as a corresponding computing entity.
• in what follows, the i:th such computing entity 230-i will be focused on together with the coordinating entity 210, but the principles discussed with reference to the computing entity 230-i apply similarly also to all other computing entities 230.
  • the coordinating entity 210 receives (as part of e.g. an operation S410 of the method 400) the request 120 originating from the requesting NF 122 (e.g. in form of the query feature vector y) .
  • the coordinating entity 210 does not necessarily have direct access to any KBs 132, following from the fact that the KBs 132 are distributed.
• identification of one or more responses to the request 120 is done using distributed optimization. On a general level, this may be performed also as part of the method 300, e.g. if the NF (entity) 110 is the coordinating entity 210 and/or represents an NF framework including several NFs (entities) .
  • the method 300 described with reference to Figure 3 may be performed either by a single entity, or be jointly performed by a plurality of entities such as the coordinating entity 210 and the plurality of computing entities 230. In the latter case, the operation S332 of distributed optimization of the method 300 may thus be performed jointly by the coordinating entity 210 and the plurality of computing entities 230, or similar.
  • the method 300 may include calculating and distributing (to each computing entity 230, as part of e.g. an operation S335 of the method 300) one or more computing entity-specific parameter values that are required by the computing entity 230-i for solving an individual regression subproblem.
  • the coordinating entity 210 may perform this as part of an operation S430 of the method 400, and the computing entity 230-i may receive such parameters from the coordinating entity 210 as part of an operation S520 of the method 500.
• the method 300 may further include (as part of e.g. an operation S336) each computing entity 230 solving the computing entity-specific regression subproblem, i.e. it is envisaged that the method 300 may include separating an optimization problem of the regression into a plurality of subproblems 240-1, 240-2, ..., 240-i, ... (herein also jointly referred to as subproblems 240) that are each individually solvable by the corresponding/respective computing entity 230.
• the computing entity 230-i may be configured to solve its own subproblem 240-i, as part of e.g. an operation S530 of the method 500.
• solving of the subproblem 240-i may include the computing entity 230-i finding computing entity-specific elements of a computing entity-specific coefficient vector β i subject to a constraint on these elements, based on e.g. the obtained one or more parameters and the subset 130-i of candidate responses available/accessible to/by the computing entity 230-i.
• the method 300 may include gathering (as part of e.g. an operation S337) the calculated/found computing entity-specific elements/coefficient vectors from the computing entities 230.
• this may include the computing entity 230-i providing its computing entity-specific coefficient vector β i to the coordinating entity 210 (as part of e.g. an operation S540 of the method 500) , and the coordinating entity 210 gathering these received elements as part of e.g. an operation S450 of the method 400, to form e.g. the full coefficient vector β.
• each computing entity 230 may for example send only the elements of its computing entity-specific coefficient vector β i that are above a certain threshold, or similar, in order to e.g. reduce the amount of information needed to be transmitted between the coordinating entity 210 and the computing entities 230.
• the coordinating entity 210 may then, as part of e.g. an operation S460 of the method 400 (e.g. as part of the operation S340 of the method 300) , identify the one or more responses pertinent to the request/query 120 based on the gathered computing entity-specific coefficient vector elements. Once these one or more responses have been identified, the coordinating entity 210 may optionally return the answer 154 to the requesting NF 122 as described earlier herein, including e.g. the detection of outliers as also described earlier herein. The coordinating entity 210 may for example communicate with e.g. the computing entities 230 and ask them to send the relevant parts of their respective KBs 132, i.e. the parts corresponding to computing entity-specific coefficient vector elements deemed relevant/important by the coordinating entity 210.
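The gather-and-identify step may be sketched as follows, assuming a sparse (index, coefficient) wire format and a global cutoff threshold (both of which are illustrative assumptions, not taken from the disclosure):

```python
def gather_and_identify(entity_results, thr=0.05):
    """entity_results maps an entity id to the sparse coefficients it chose to
    report, as (local_index, value) pairs; the coordinator merges them and
    ranks globally without ever seeing the raw KB vectors."""
    pertinent = []
    for eid, sparse_beta in entity_results.items():
        for j, b in sparse_beta:
            if b > thr:
                pertinent.append((eid, j, b))
    pertinent.sort(key=lambda t: -t[2])  # most pertinent element first
    return pertinent

merged = gather_and_identify({
    "kb-1": [(0, 0.9), (3, 0.01)],  # entity may already drop tiny entries...
    "kb-2": [(1, 0.2)],             # ...or the coordinator drops them here
})
```

The coordinator can then ask each entity for only the KB parts behind the surviving (entity, index) pairs, keeping the exchanged data small.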
• the computing entity 230-i may apply the constraint that all elements of its computing entity-specific coefficient vector β i are non-negative.
• a computing entity 230-i may also be referred to as a computing NF, and the coordinating entity 210 may also be referred to as a coordinating NF, meaning that the functionality offered by the entities 210 and 230 may be provided as part of various NFs, and similar.
  • each entity 210 and 230 may expose an interface through which it may e.g. receive parameter values, and through which it may distribute one or more parameter values (or other values) calculated based on the received parameter values/requests, e.g. by solving one or more regression (sub) problems as described herein.
• the distributed optimization may be based on ADMM (Alternating Direction Method of Multipliers) , whereby the overall regression problem is split into subproblems such as the problems (5) - (7) referred to below.
  • the problem (5) is a smaller non-negative LASSO-type problem that may be solved individually by the computing entity 230-i by different methods such as quadratic programming, projected gradient descent and similar.
  • the problem (6) is a typical non-constrained optimization problem that may be solved by the coordinating entity 210 using methods like gradient descent and similar.
  • the problem (7) of finding may be solved by each computing entity 230 once the value of the parameter is found and communicated by the coordinating entity 210.
  • variable/parameter may be referred to as a (computing entity-specific) slack-variable, while the variable/parameter may be referred to as a (computing entity-specific) dual variable, or e.g. as a (computing entity-specific) scaled dual variable.
  • the superscript k indicates iteration step, i.e. the problems are solved iteratively, until e.g. one or more stop criteria are met.
• a criterion may be that a measure of the change between successive iterations falls below a cutoff threshold value. Such a stopping criterion may be evaluated on/by the coordinating entity 210.
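One possible realization of such a stopping criterion (the maximum per-element change measure and the cutoff value eps are assumptions) is:

```python
def converged(prev, curr, eps=1e-6):
    """Stop when the largest per-element change between two successive
    iterates falls below the cutoff threshold eps."""
    return max(abs(a - b) for a, b in zip(prev, curr)) < eps

done = converged([0.9, 0.0], [0.9 + 1e-9, 0.0])
not_done = converged([0.9, 0.0], [0.8, 0.0])
```

Evaluating this on the coordinating entity only requires the gathered iterates, not the underlying KB data.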
  • Figure 6 schematically illustrates a signaling scheme 600, illustrating how data may be communicated between the coordinating entity 210 and e.g. the computing entity 230-i as part of performing the distributed optimization envisaged herein. As mentioned earlier herein, it is assumed that each computing entity 230 operates similarly to what is described for the computing entity 230-i.
  • the coordinating entity 210 receives the request 120.
  • the computing entity 230-i initializes the parameters and (for example as vectors with random number entries) , and also receives an initial parameter value calculated/defined by the coordinating entity 210 as part of e.g. an operation S421.
• the operation S421 may for example include setting the entries of the parameter as random numbers. In other examples, the parameter may be defined locally by each computing entity 230-i, e.g. with random number entries, in which case operation S421 is optional.
  • the computing entity 230-i may communicate the values of and to the coordinating entity 210. In any example, it is envisaged that parameter values may be communicated between the coordinating entity 210 and computing entity 230-i as needed, such that each entity uses the same value for a same parameter for a same iteration.
  • the scheme 600 then progresses iteratively, wherein the problems (5) - (7) are iteratively solved and information exchanged between the coordinating entity 210 and the computing entity 230-i as required.
  • the computing entity 230-i locally calculates as defined by problem (5) , as part of e.g. an operation S531 (i.e. as part of a suboperation of the operation S530 of the method 500) .
  • the coordinating entity 210 calculates (for each computing entity 230) as defined by problem (6) , e.g. as part of a suboperation S431 of the operation S430 of the method 400.
  • the coordinating entity 210 sends to computing entity 230-i, that receives as part of a suboperation S521 of the operation S520.
  • the computing entity 230-i calculates as defined by problem (7) , e.g. as part of a suboperation S532 of operation S530.
  • the computing entity 230-i sends/provides to the coordinating entity 210 (as part of e.g. the suboperation S532) , and the coordinating entity 210 receives as part of a suboperation S432 of the operation S430.
• the computing entity 230-i may proceed with calculating its next iterate, the coordinating entity 210 may proceed with calculating its next iterate, and so on, until convergence is reached.
• the computing entity 230-i may define its computing entity-specific coefficient vector β i and, as part of the operation S540, provide this vector β i (or e.g. only the relevant entries thereof) to the coordinating entity 210.
  • the coordinating entity 210 may thus gather the computing-entity specific coefficient vector elements from the various computing entities 230 as part of e.g. a suboperation S451 of the operation S450.
• the coordinating entity 210 may identify the responses 152 (i.e. the candidate feature vectors) that are pertinent to the request 120 (as part of the operation S460) , and may optionally continue (as part of the operation S470) by returning the answer 154 to the requesting NF 122. If/once a new request 120 arrives, the process indicated in Figure 6 may be repeated.
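The message exchange of the scheme 600 may be approximated by the following self-contained sketch. For brevity, the ADMM updates of problems (5) - (7) are replaced by a simpler block coordinate descent over a coordinator-maintained residual; this preserves the key property that each computing entity keeps its KB shard private and exchanges only aggregate vectors, but it is not the disclosure's exact iteration. All names are illustrative:

```python
def vadd(a, b): return [x + y for x, y in zip(a, b)]
def vsub(a, b): return [x - y for x, y in zip(a, b)]
def vscale(c, a): return [c * x for x in a]
def vdot(a, b): return sum(x * y for x, y in zip(a, b))

class ComputingEntity:
    """Holds a private KB shard X_i; ships only aggregate d-dim vectors."""
    def __init__(self, X, lam=0.1):
        self.X, self.lam = X, lam
        self.beta = [0.0] * len(X)

    def contribution(self):
        # this shard's current reconstruction X_i * beta_i
        out = [0.0] * len(self.X[0])
        for b, x in zip(self.beta, self.X):
            out = vadd(out, vscale(b, x))
        return out

    def local_update(self, target):
        # non-negative coordinate descent on this shard's block only
        for j, x in enumerate(self.X):
            others = vsub(self.contribution(), vscale(self.beta[j], x))
            g = vdot(x, vsub(target, others))
            n = vdot(x, x)
            self.beta[j] = max(0.0, (g - self.lam) / n) if n > 0 else 0.0
        return self.contribution()

def coordinate(y, entities, rounds=50):
    """Coordinator loop: send each entity the residual excluding its own
    contribution, receive the updated contribution, and repeat."""
    contribs = [e.contribution() for e in entities]
    for _ in range(rounds):
        total = [0.0] * len(y)
        for c in contribs:
            total = vadd(total, c)
        for i, e in enumerate(entities):
            target = vsub(y, vsub(total, contribs[i]))
            new_c = e.local_update(target)
            total = vadd(vsub(total, contribs[i]), new_c)
            contribs[i] = new_c
    return [e.beta for e in entities]

# Toy run: two KB shards, query aligned with the first shard's only vector.
ents = [ComputingEntity([[1.0, 0.0]]), ComputingEntity([[0.0, 1.0]])]
betas = coordinate([1.0, 0.0], ents)
```

Only d-dimensional aggregate vectors cross the coordinator/entity interface, mirroring the privacy property of the distributed scheme described above.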
  • FIG. 7A schematically illustrates, in terms of a number of functional units, the components of an example NF entity 700 according to the present disclosure (such as the NF 110) .
  • the NF entity 700 may form part of a communications network, such as a telecommunications network and be configured to perform e.g. one or more of the various examples of the method 300 of identifying one or more responses to a request as described earlier herein.
  • the NF entity 700 includes processing circuitry 710.
  • the processing circuitry 710 is provided using any combination of one or more of a suitable central processing unit (CPU) , multiprocessor, microcontroller, digital signal processor (DSP) , etc., capable of executing software instructions stored in a computer program product 1010a (see Figure 10 and the description thereof) , e.g. in form of a storage medium/memory 720 that may also form part of the NF entity 700.
  • the processing circuitry 710 may further be provided as at least one application specific integrated circuit (ASIC) , or field-programmable gate array (FPGA) .
  • the processing circuitry 710 is configured to cause the NF entity 700 to perform a set of operations, or steps, as disclosed above e.g. when describing the method 300 illustrated in Figure 3.
  • the storage medium 720 may store a set of operations
  • the processing circuitry 710 may be configured to retrieve the set of operations from the storage medium 720 to cause the NF entity 700 to perform the set of operations.
  • the set of operations may be provided as a set of executable instructions.
  • the processing circuitry 710 is thereby arranged to execute examples of a method associated with identifying one or more responses to a request as disclosed herein e.g. with reference to Figures 1 and 3.
  • the storage medium 720 may also include persistent storage, which, for example, can be any single or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • the NF entity 700 may further include a communications interface 730 for communications with other entities, functions, nodes, and devices, such as e.g. those of a telecommunications network or any other network associated with the task to be solved.
  • the communications interface 730 may allow the NF entity 700 to communicate with e.g. other entities, with one or more other NFs or nodes of a telecommunications network, or e.g. with one or more internal operational modules within a same node, etc.
  • the communication interface 730 may include one or more transmitters and receivers, including analogue and/or digital components.
  • the processing circuitry 710 controls the general operation of the NF entity 700 e.g. by sending data and control signals to the communications interface 730 and the storage medium 720, by receiving data and reports from the communications interface 730, and by retrieving data and instructions from the storage medium 720.
  • Other components, as well as their related functionality, of the NF entity 700 are omitted in order not to obscure the concepts presented herein.
  • FIG. 7B schematically illustrates, in terms of a number of functional modules 710a, 710b and 710c, the components of an NF entity 700 according to one or more examples of the present disclosure.
  • the NF entity 700 includes at least a first module 710a configured to perform one or more of operations S310 and S320 of the method 300 described with reference to Figure 3.
  • the module 710a may be referred to as an “obtaining module” , “obtain module” or similar.
  • the NF entity 700 also includes a second module 710b configured to perform operation S330.
  • the module 710b may be referred to as a “regression module” , “regressor” or similar.
  • the NF entity 700 further includes a third module 710c configured to perform operation S340 of the method 300.
  • the module 710c may be referred to as an “identifying module” , an “identification module” , or similar. In other examples, two or more of the modules 710a to 710c may instead be provided as part of a single module, e.g. as part of a combined obtain/regress/identify module.
  • the NF entity 700 may also include one or more optional functional modules (illustrated by the dashed box 710d) , such as for example performing the (optional) operation S350, or e.g. to implement the operation S332 and any of the suboperations S335, S336 and S337 in case of a distributed setting.
  • each functional module 710a-d may be implemented in hardware or in software.
  • one or more or all functional modules 710a-d may be implemented by the processing circuitry 710, possibly in cooperation with the communications interface 730 and/or the storage medium 720.
  • the processing circuitry 710 may thus be arranged to from the storage medium 720 fetch instructions as provided by a functional module 710a-d, and to execute these instructions and thereby perform any operations of any example of the method 300 performed by/in the NF entity 700 as disclosed herein.
  • FIG 8A schematically illustrates, in terms of a number of functional units, the components of an example coordinating entity 800 according to the present disclosure (such as the entity 210) .
  • the coordinating entity 800 may form part of a communications network, such as a telecommunications network and be configured to perform e.g. one or more of the various examples of the method 400 of assisting in identifying one or more responses to a request as described earlier herein.
  • the coordinating entity 800 includes processing circuitry 810.
  • the processing circuitry 810 is provided using any combination of one or more of a suitable central processing unit (CPU) , multiprocessor, microcontroller, digital signal processor (DSP) , etc., capable of executing software instructions stored in a computer program product 1010b (see Figure 10 and the description thereof) , e.g. in form of a storage medium/memory 820 that may also form part of the coordinating entity 800.
  • the processing circuitry 810 may further be provided as at least one application specific integrated circuit (ASIC) , or field-programmable gate array (FPGA) .
  • the processing circuitry 810 is configured to cause the coordinating entity 800 to perform a set of operations, or steps, as disclosed above e.g. when describing the method 400 illustrated in Figure 4.
  • the storage medium 820 may store a set of operations
  • the processing circuitry 810 may be configured to retrieve the set of operations from the storage medium 820 to cause the coordinating entity 800 to perform the set of operations.
  • the set of operations may be provided as a set of executable instructions.
  • the processing circuitry 810 is thereby arranged to execute examples of a method associated with assisting in identifying one or more responses to a request as disclosed herein e.g. with reference to Figures 4 and 6.
  • the storage medium 820 may also include persistent storage, which, for example, can be any single or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • the coordinating entity 800 may further include a communications interface 830 for communications with other entities, functions, nodes, and devices, such as e.g. those of a telecommunications network or any other network associated with the task to be solved, such as with one or more computing entities.
  • the communications interface 830 may allow the coordinating entity 800 to communicate with e.g. other entities, with one or more other NFs or nodes of a telecommunications network, or e.g. with one or more internal operational modules within a same node, etc.
  • the communication interface 830 may include one or more transmitters and receivers, including analogue and/or digital components.
  • the processing circuitry 810 controls the general operation of the coordinating entity 800 e.g. by sending data and control signals to the communications interface 830 and the storage medium 820, by receiving data and reports from the communications interface 830, and by retrieving data and instructions from the storage medium 820.
  • Other components, as well as their related functionality, of the coordinating entity 800 are omitted in order not to obscure the concepts presented herein.
  • FIG 8B schematically illustrates, in terms of a number of functional modules 810a, 810b and 810c, the components of a coordinating entity 800 according to one or more examples of the present disclosure.
  • the coordinating entity 800 includes at least a first module 810a configured to perform operation S410 of the method 400 described with reference to Figure 4.
  • the module 810a may be referred to as an “obtaining module” , “obtain module” or similar.
  • the coordinating entity 800 also includes a second module 810b configured to perform operation S430.
  • the module 810b may be referred to as a “regression module” , “regressor” or similar.
  • the coordinating entity 800 further includes a third module 810c configured to perform one or more of operations S440, S450 and S460 of the method 400.
• the module 810c may be referred to as a “distribute/gather/identify module” , a “distribution/gathering/identifying module” , or similar.
• two or more of the modules 810a to 810c may instead be provided as part of a single module, e.g. as part of a combined obtain/regress/distribute/gather/identify module, or two or more of the modules 810a to 810c may instead be split into a larger number of modules, such as e.g. a separate module for each of the operations S440, S450 and S460.
  • the coordinating entity 800 may also include one or more optional functional modules (illustrated by the dashed box 810d) , such as for example performing the (optional) operation S470.
  • each functional module 810a-d may be implemented in hardware or in software.
  • one or more or all functional modules 810a-d may be implemented by the processing circuitry 810, possibly in cooperation with the communications interface 830 and/or the storage medium 820.
  • the processing circuitry 810 may thus be arranged to from the storage medium 820 fetch instructions as provided by a functional module 810a-d, and to execute these instructions and thereby perform any operations of any example of the method 400 performed by/in the coordinating entity 800 as disclosed herein.
  • FIG. 9A schematically illustrates, in terms of a number of functional units, the components of an example computing entity 900 according to the present disclosure (such as the entity 230-i) .
• the computing entity 900 may form part of a communications network, such as a telecommunications network, and be configured to perform e.g. one or more of the various examples of the method 500 of assisting in identifying one or more responses to a request as described earlier herein.
• the computing entity 900 includes processing circuitry 910.
  • the processing circuitry 910 is provided using any combination of one or more of a suitable central processing unit (CPU) , multiprocessor, microcontroller, digital signal processor (DSP) , etc., capable of executing software instructions stored in a computer program product 1010c (see Figure 10 and the description thereof) , e.g. in form of a storage medium/memory 920 that may also form part of the computing entity 900.
  • the processing circuitry 910 may further be provided as at least one application specific integrated circuit (ASIC) , or field-programmable gate array (FPGA) .
  • the processing circuitry 910 is configured to cause the computing entity 900 to perform a set of operations, or steps, as disclosed above e.g. when describing the method 500 illustrated in Figure 5.
  • the storage medium 920 may store a set of operations
  • the processing circuitry 910 may be configured to retrieve the set of operations from the storage medium 920 to cause the computing entity 900 to perform the set of operations.
  • the set of operations may be provided as a set of executable instructions.
  • the processing circuitry 910 is thereby arranged to execute examples of a method associated with assisting in identifying one or more responses to a request as disclosed herein e.g. with reference to Figures 5 and 6.
  • the storage medium 920 may also include persistent storage, which, for example, can be any single or combination of magnetic memory, optical memory, solid state memory or even remotely mounted memory.
  • the computing entity 900 may further include a communications interface 930 for communications with other entities, functions, nodes, and devices, such as e.g. those of a telecommunications network or any other network associated with the task to be solved, such as with one or more coordinating entities.
  • the communications interface 930 may allow the computing entity 900 to communicate with e.g. other entities, with one or more other NFs or nodes of a telecommunications network, or e.g. with one or more internal operational modules within a same node, etc.
  • the communication interface 930 may include one or more transmitters and receivers, including analogue and/or digital components.
  • the processing circuitry 910 controls the general operation of the computing entity 900 e.g. by sending data and control signals to the communications interface 930 and the storage medium 920, by receiving data and reports from the communications interface 930, and by retrieving data and instructions from the storage medium 920.
  • Other components, as well as their related functionality, of the computing entity 900 are omitted in order not to obscure the concepts presented herein.
• FIG. 9B schematically illustrates, in terms of a number of functional modules 910a, 910b and 910c, the components of a computing entity 900 according to one or more examples of the present disclosure.
  • the computing entity 900 includes at least a first module 910a configured to perform operation S520 of the method 500 described with reference to Figure 5.
  • the module 910a may be referred to as an “obtaining module” , “obtain module” or similar.
  • the computing entity 900 also includes a second module 910b configured to perform operation S530.
  • the module 910b may be referred to as a “ (local) regression module” , “ (local) regressor” or similar.
  • the computing entity 900 further includes a third module 910c configured to perform operation S540 of the method 500.
• the module 910c may be referred to as a “provide module” , a “provision module” , or similar. In other examples, two or more of the modules 910a to 910c may instead be provided as part of a single module, e.g. as part of a combined obtain/regress/provide module, or two or more of the modules 910a to 910c may instead be split into a larger number of modules.
  • the computing entity 900 may also include one or more optional functional modules (illustrated by the dashed box 910d) , as/if required to perform any other functionality of the computing entity 900.
  • each functional module 910a-d may be implemented in hardware or in software.
  • one or more or all functional modules 910a-d may be implemented by the processing circuitry 910, possibly in cooperation with the communications interface 930 and/or the storage medium 920.
  • the processing circuitry 910 may thus be arranged to fetch, from the storage medium 920, instructions as provided by a functional module 910a-d, and to execute these instructions, thereby performing any operations of any example of the method 500 performed by/in the computing entity 900 as disclosed herein.
  • Figure 10 schematically illustrates an example computer program product 1010a, including computer readable means 1030.
  • on the computer readable means 1030, a computer program 1020a can be stored, which computer program 1020a can cause the processing circuitry 710 and thereto operatively coupled entities and devices, such as the communication interface 730 and the storage medium 720, of the NF entity 700 to execute one or more of the examples of a method 300 as described with reference to e.g. Figures 1, 3 and 6.
  • the computer program 1020a and/or computer program product 1010a may thus provide means for performing any operations of any method 300 performed by the NF entity 700 as disclosed herein.
  • Figure 10 also schematically illustrates an example computer program product 1010b, in which a computer program 1020b is stored (either alone or in addition to the program 1020a) on the computer readable means 1030.
  • the computer program 1020b can cause the processing circuitry 810 and thereto operatively coupled entities and devices, such as the communication interface 830 and the storage medium 820, of the coordinating entity 800 to assist in identifying responses to a request in accordance with e.g. any example of the method 400 as described herein with reference to e.g. Figures 2, 4 and 6.
  • the computer program 1020b and/or computer program product 1010b may thus provide means for performing any operations of any method 400 performed by the coordinating entity 800 as disclosed herein.
  • Figure 10 also schematically illustrates an example computer program product 1010c, in which a computer program 1020c is stored (either alone or in addition to one or more of the programs 1020a and 1020b) on the computer readable means 1030.
  • the computer program 1020c can cause the processing circuitry 910 and thereto operatively coupled entities and devices, such as the communication interface 930 and the storage medium 920, of the computing entity 900 to assist in identifying responses to a request in accordance with e.g. any example of the method 500 as described herein with reference to e.g. Figures 2, 5 and 6.
  • the computer program 1020c and/or computer program product 1010c may thus provide means for performing any operations of any method 500 performed by the computing entity 900 as disclosed herein.
  • the computer program products 1010a, 1010b, 1010c and computer readable means 1030 are illustrated as an optical disc, such as a CD (compact disc) or a DVD (digital versatile disc) or a Blu-Ray disc.
  • the computer program products 1010a, 1010b, 1010c and computer readable means 1030 could also be embodied as a memory, such as a random-access memory (RAM) , a read-only memory (ROM) , an erasable programmable read-only memory (EPROM) , or an electrically erasable programmable read-only memory (EEPROM) and more particularly as a non-volatile storage medium of a device in an external memory such as a USB (Universal Serial Bus) memory or a Flash memory, such as a compact Flash memory.
  • a computer-readable storage medium may also be transitory and e.g. correspond to a signal (electrical, optical, mechanical, or similar) present on e.g. a communication link, wire, or similar means of signal transferring.
  • Figure 11 schematically illustrates examples of a telecommunications network 1100 as envisaged herein.
  • the telecommunications network 1100 includes at least one of i) the NF entity 700, and ii) the coordinating entity 800 together with a plurality of the computing entities 900-1, 900-2, ..., 900-M (i.e. M such computing entities in total) .
  • Each of the NF entity 700 and the coordinating entity 800 is configured to communicate with a requesting NF 122, in order to receive the request from the requesting NF 122.
  • the requesting NF 122 may form part of the telecommunications network 1100.
  • the coordinating and computing entities 800 and 900-i may form part of the NF entity 700.
  • Figure 12A schematically illustrates a general overview 1200, in which a request 1210 is embedded (using e.g. an embedding model 1212) into a query feature vector 1214 (i.e. y) , and in which a plurality of candidate responses 1220 are embedded (using e.g. an embedding model 1222 that may be the same as the model 1212) to form corresponding candidate feature vectors 1224 (i.e. {x_j}) .
  • the request 1210/query feature vector 1214 and the candidate responses 1220/feature vectors 1224 are then processed by e.g. the NF 110 (as implemented using e.g. the NF entity 700) and/or the coordinating entity 210 together with the plurality of computing entities 230-i (as implemented using e.g. the coordinating entity 800 and the plurality of computing entities 900) , in order to provide an output 1250 in the form of one or more responses identified as being pertinent to the request 1210 (or e.g. an answer formed from such one or more pertinent responses) .
  • the NF 110 (and/or the entities 210/230-i) is configured to identify the pertinent responses using regression with a constraint on the coefficient vector elements (e.g. β ≥ 0) .
  • a similar-vector searching problem is here illustrated as attempting to find, out of a plurality of candidate feature vectors 1242 and 1244, only such feature vectors 1244 that are sufficiently close to a query feature vector 1240 (e.g. lying within a hypothetical circle 1246 of some high-dimensional feature vector space) .
  • in contemporary solutions, a radius (or equivalent) of the circle 1246 is predefined/fixed, and/or e.g. adapted to return a fixed number of pertinent candidate feature vectors 1244.
  • the solution proposed herein instead relies on regression, and on inferring the number of pertinent candidate feature vectors 1244 from the number of positive coefficient vector elements after such regression.
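The selection principle above can be sketched in code. The following is a minimal illustration, not the disclosed implementation: a simple projected-gradient solver stands in for whatever regression method is actually used, the candidate responses are toy 2-D vectors, and the pertinence threshold `1e-3` is an arbitrary assumption. Candidates receiving a positive coefficient under the constraint β ≥ 0 are deemed pertinent, so the number of pertinent responses is inferred rather than fixed in advance:

```python
import numpy as np

def nonnegative_regression(X, y, steps=5000):
    """Solve min ||X @ beta - y||^2 subject to beta >= 0 (projected gradient)."""
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)   # step size from spectral norm
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (X @ beta - y)
        beta = np.maximum(0.0, beta - lr * grad)     # project onto beta >= 0
    return beta

# toy candidates: each row is one candidate feature vector x_j
candidates = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [-1.0, 0.0]])
query = np.array([0.95, 0.05])                       # query feature vector y
X = candidates.T                                     # shape (d, n): one column per candidate
beta = nonnegative_regression(X, query)
pertinent = [j for j, b in enumerate(beta) if b > 1e-3]
```

Note that the set of positive coefficients, and hence the number of identified responses, falls out of the regression itself; no k or search radius needs to be predefined, and a candidate pointing away from the query (here the fourth one) simply receives a zero coefficient.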
  • the proposed solution is particularly useful in a distributed setting, as e.g. no sensitive KB-data is needed to be shared across different domains, e.g. between computing entities 230 and the coordinating entity 210 as part of solving the (distributed) regression problem.
  • it is envisaged to use distributed scope reduction in order to reduce the computational complexity, as illustrated in Figure 12A by the dashed box 1230.
  • the scope reduction may e.g. use a distributed nearest neighbor algorithm with a fixed number of nearest neighbors.
  • this number may be large, e.g. 1000, and the candidate feature vectors remaining after such a reduction may be used as the plurality of candidate responses/feature vectors 1220/1224.
  • there are various distributed nearest neighbor algorithms available, such as e.g. distributed k nearest neighbor (kNN) algorithms.
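As a rough sketch of such a distributed scope reduction (the shard layout, the Euclidean distance metric, and the merging strategy are illustrative assumptions, not taken from the disclosure): each computing entity finds its own k nearest candidates to the query, and only the resulting (distance, index) pairs, not the underlying KB data, are sent to the coordinator for merging:

```python
import heapq
import numpy as np

def local_topk(shard, query, k):
    """Run at each computing entity: return its k nearest candidates to the query."""
    dists = np.linalg.norm(shard - query, axis=1)
    order = np.argsort(dists)[:k]
    return [(float(dists[i]), int(i)) for i in order]

def distributed_topk(shards, query, k):
    """Run at the coordinator: merge per-shard results into a global top-k.
    Only distances and local indices cross the shard boundary."""
    merged = []
    for shard_id, shard in enumerate(shards):
        merged.extend((d, shard_id, i) for d, i in local_topk(shard, query, k))
    return heapq.nsmallest(k, merged)

rng = np.random.default_rng(0)
shards = [rng.normal(size=(50, 4)) for _ in range(3)]   # 3 computing entities
query = np.zeros(4)
top = distributed_topk(shards, query, 10)
```

The candidates surviving this reduction (e.g. the closest 1000 overall) can then be used as the reduced plurality of candidate responses fed into the regression step.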
  • FIG. 12B schematically illustrates a particular use-case 1201 of the overview 1200, in which the envisaged methods of the present disclosure are used in an LLM-driven QA service.
  • local documents 1260 are loaded by a document loader 1261 to provide text output 1262.
  • the text output 1262 is then split into text chunks using a text splitter 1263, to generate the plurality of candidate responses 1220.
  • a natural language query is received via e.g. a prompting tool, as e.g. the request 1210.
  • the output 1250 is provided to a first module 1270 responsible for identifying the relevant text chunks based on the output/identified responses 1250.
  • the relevant text chunks are provided to a prompt template 1271 in order to create a prompt 1272.
  • an answer 1274 is then provided in response to the incoming query.
  • such a model/architecture may be referred to as Retrieval Augmented Generation (RAG) , and has been shown to be successful in implementing LLM-driven QA systems.
  • the proposed methods and (NF) entities may be particularly useful in implementing such a system, especially as the need for predefined k-values is eliminated, and as distributed settings in which KBs are distributed among e.g. multiple (NF) entities can also be supported.
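The RAG-style pipeline of Figure 12B can be outlined as follows. This is a hedged sketch: the fixed-size chunking, the prompt wording, and all function names are assumptions for illustration, and the retrieval step (the regression-based selection described above) is abstracted behind a simple keyword-overlap placeholder `select_pertinent`:

```python
def split_text(text, chunk_size=80):
    """Text splitter: naive fixed-size chunking of the loaded document text."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def select_pertinent(query, chunks):
    """Placeholder for the regression-based retrieval: here, keyword overlap."""
    terms = set(query.lower().split())
    return [c for c in chunks if terms & set(c.lower().split())]

def build_prompt(query, relevant_chunks):
    """Prompt template combining the query with the retrieved chunks."""
    context = "\n".join(f"- {c}" for c in relevant_chunks)
    return ("Answer the question using only the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}\nAnswer:")

chunks = split_text("the cell throughput dropped at noon. billing was unaffected.", 30)
relevant = select_pertinent("what happened to throughput", chunks)
prompt = build_prompt("what happened to throughput", relevant)
```

In a full system the prompt would then be passed to the LLM, which produces the answer returned to the incoming query.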
  • Figure 12C schematically illustrates a different envisaged use-case 1202 that may also benefit from the proposed methods and (NF) entities of the present disclosure.
  • the use-case 1202 is not an LLM example but instead relates to classification.
  • an entity to be predicted 1280 may be converted to a feature vector (e.g. to the request 1210) by the use of a feature extraction method/model 1281.
  • a plurality of entities with labels 1282 may be converted to feature vectors via a feature extraction method/model 1283 (that may be the same as or similar to the method/model 1281) .
  • the envisaged solution may be used to perform similar vector searching in order to find the most pertinent entities among the entities 1282, and these entities may be provided as a response to the query, using e.g. a classification result module 1284.
  • entities may e.g. be images, audio files, video files, or similar, whose contents may be labeled.
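As an illustrative sketch of the classification use-case 1202 (a cosine-similarity cut-off here stands in for the regression-based similar-vector search, and the threshold 0.9 is an arbitrary assumption): the labels of the pertinent entities vote on the predicted class, as a classification result module might do:

```python
from collections import Counter
import numpy as np

def classify_by_pertinent(query_vec, labeled_vecs, labels, threshold=0.9):
    """Find entities pertinent to the query vector, then majority-vote on labels."""
    sims = labeled_vecs @ query_vec / (
        np.linalg.norm(labeled_vecs, axis=1) * np.linalg.norm(query_vec))
    pertinent = np.where(sims >= threshold)[0]      # indices of pertinent entities
    votes = Counter(labels[i] for i in pertinent)
    return votes.most_common(1)[0][0], list(pertinent)

# toy feature vectors, e.g. as produced by a feature extraction model
vecs = np.array([[1.0, 0.0], [0.95, 0.1], [0.0, 1.0]])
labels = ["cat", "cat", "dog"]
predicted, pertinent = classify_by_pertinent(np.array([1.0, 0.05]), vecs, labels)
```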
  • Figure 12D schematically illustrates yet another different envisaged use-case 1203 that may also benefit from the proposed methods and (NF) entities of the present disclosure.
  • the use-case 1203 is a log analytics service, and may include providing a log to be predicted 1290 to a feature extractor 1291 in order to generate the request 1210 (or even the query feature vector 1214) , and providing a plurality of historical logs (of some activity) 1292 to a same or similar feature extractor 1293 in order to generate the plurality of responses 1220 (or even the candidate feature vectors 1224) .
  • the output 1250 may be provided to e.g. a similar historical log module 1294, in order to identify the historical logs most pertinent to the log 1290.
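The log-analytics use-case 1203 can be sketched similarly; the bag-of-tokens feature extraction and cosine scoring below are illustrative assumptions standing in for the feature extractors 1291/1293 and the similar-vector search:

```python
import math
from collections import Counter

def log_features(log_line):
    """Feature extractor: bag-of-tokens counts for one log line."""
    return Counter(log_line.lower().split())

def cosine(a, b):
    """Cosine similarity between two token-count feature vectors."""
    dot = sum(count * b.get(token, 0) for token, count in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar_logs(new_log, historical, top=2):
    """Return the historical logs most pertinent to the new log."""
    q = log_features(new_log)
    return sorted(historical, key=lambda h: cosine(q, log_features(h)),
                  reverse=True)[:top]

historical = ["error disk full on node a", "user login ok",
              "error disk failure node b"]
similar = most_similar_logs("error disk full node b", historical)
```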
  • envisaged methods and/or (NF) entities of the present disclosure are applicable for multiple different use-cases.
  • the envisaged solution (s) of the present disclosure can be complementary to existing vector databases, as the core functionality of a vector database is to efficiently search for similar feature vectors, which is often required/performed as part of various Artificial Intelligence (AI) /Machine Learning (ML) systems.
  • FIG. 13 schematically illustrates a scenario 1300 in which the envisaged solution is used as part of a Network Data Analytics Function (NWDAF) or NWDAF framework of a telecommunications network.
  • NWDAF is a function provided by contemporary Fifth-generation (5G) telecommunications networks in order to perform analytics including training/inference of ML models and similar.
  • NWDAF architectures are limited to specific requests that are API-driven, meaning that a specific API is exposed to handle fixed requests.
  • the design of the API may take considerable time to finalize, as part of being a product of a standardization process.
  • one such contemporary API may receive an Analytics ID, an Area of Interest, a measurement duration and an aggregation function via the API.
  • such an API may instead be amended to support queries written in natural language, such as English, which may allow more versatile queries to be supported without e.g. a need to change or introduce any new interfaces, instead maintaining only a single API, in line with the outspoken goal of 3GPP to reduce the number of NFs and APIs in order to reduce system complexity and the cost of standardization.
  • the present disclosure proposes an extension to NWDAF as illustrated in Figure 13, which leverages the various mechanisms proposed herein in order to enable queries expressed in natural language and also across different administrative domains (e.g. between different network operators) , while taking into account/preserving privacy of private KBs.
  • an NF consumer 1310 may provide requests (using e.g. natural language) to a Data Collection Co-ordination (& Delivery) Function (DCCF) 1320.
  • the NWDAF may host an LLM 1360 (as part of e.g. an LLM microservice) , and this LLM may use the proposed solution of the present disclosure to identify e.g. relevant documents (represented as candidate feature vectors as described herein) for a given query in different administrative domains via the DCCF 1320.
  • the endpoint for the NL-based query from the NF consumer 1310 is retrieved via an NWDAF Analytical Logical Function (AnLF) . The DCCF 1320 implements a query controller 1330, which is a proposed extension to contemporary NWDAF and which uses the mechanisms provided herein to search for relevant documents in one or more Analytical Data Repository Functions (ADRFs) 1370-1, 1370-2, ..., 1370-N (where N is the number of such ADRFs) .
  • the query controller 1330 then returns such documents back to be used by the LLM in order to produce a response to the given query.
  • at least part of an ADRF may be implemented by a computing entity 230 as envisaged herein.
  • at least part of the DCCF (including e.g. the query controller 1330) may be implemented by the coordinating entity 210 as envisaged herein.
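The fan-out behaviour of the query controller 1330 across multiple ADRFs can be sketched as follows. `ADRFStub` and its keyword-overlap search are hypothetical stand-ins (a real ADRF would run the local regression/search on its private KB and expose it via a service interface); only the matching documents, never the full private KB, cross the domain boundary:

```python
class ADRFStub:
    """Hypothetical stand-in for an Analytical Data Repository Function:
    serves locally stored documents matching a simple keyword overlap."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query):
        terms = set(query.lower().split())
        return [d for d in self.docs if terms & set(d.lower().split())]

class QueryController:
    """Sketch of the DCCF query controller: fan the query out to every
    ADRF and return the merged, de-duplicated document list."""
    def __init__(self, adrfs):
        self.adrfs = adrfs

    def handle(self, query):
        seen, merged = set(), []
        for adrf in self.adrfs:
            for doc in adrf.search(query):
                if doc not in seen:
                    seen.add(doc)
                    merged.append(doc)
        return merged

adrfs = [ADRFStub(["cell throughput report q1", "billing summary"]),
         ADRFStub(["throughput anomaly log", "billing summary"])]
controller = QueryController(adrfs)
found = controller.handle("cell throughput")
```

The merged documents would then be returned to the LLM to ground its response to the given query.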
  • the present disclosure improves upon contemporary technology in that it makes it possible, within a telecommunications network, to more efficiently find responses pertinent to an incoming request, such as which parts of one or more KBs are pertinent to an incoming query, where the query may e.g. be a natural language query or similar.
  • the proposed method uses inference to assess which candidate feature vectors are sufficiently close to a query feature vector, and does not require e.g. predefining a number k of candidate feature vectors that are to be considered as being similar to the query feature vector.
  • the proposed method thus reduces the risk of e.g. missing relevant information as part of a response to the query, and also reduces the risk of consuming unnecessary computational resources by e.g. processing more candidate responses than are actually pertinent.
  • the proposed solution makes it possible to improve the quality of e.g. a QA system while keeping the workload under control.
  • the proposed solution is also, as has been demonstrated herein, capable of handling distributed scenarios wherein knowledge/information is found distributed e.g. over multiple storages, and wherein sharing of information between such storages is not desirable.
  • the proposed solution can also be migrated to other use-cases than e.g. LLM-driven QA systems, and be used for e.g. analyzing test result logs, classification of various entities, and similar.
  • the present solution is proposed as suitable for use in an NWDAF or NWDAF framework, in order to enable the use of natural language queries between NFs, or e.g. between an NF and an NF consumer.

Abstract

A method for identifying one or more responses (152) to a request (120) is provided. The method is performed as part of at least one network function (NF; 110) of a telecommunications network, and includes obtaining a request (120) from a requesting NF (122); obtaining a plurality of candidate responses (130); computing, by performing a regression (140) based on the request and candidate responses, elements of a coefficient vector (β) subject to a constraint on the elements; and identifying, among the plurality of candidate responses, one or more responses (152) pertinent to the request based on one or more of the computed elements of the coefficient vector. Also provided are a similar method using distributed optimization, suitable for settings in which the candidate responses are distributed over multiple storages, as well as corresponding NF entities, coordinating entities, computing entities, a telecommunications network, and various computer programs and computer program products.
PCT/CN2024/091088 2024-05-03 2024-05-03 Enhanced network function interaction in telecommunications networks Pending WO2025227423A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2024/091088 WO2025227423A1 (fr) 2024-05-03 2024-05-03 Enhanced network function interaction in telecommunications networks

Publications (1)

Publication Number Publication Date
WO2025227423A1 true WO2025227423A1 (fr) 2025-11-06

Family

ID=91375801

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/091088 Pending WO2025227423A1 (fr) 2024-05-03 2024-05-03 Interaction de fonction de réseau améliorée dans des réseaux de télécommunications

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103955951A (zh) * 2014-05-09 2014-07-30 合肥工业大学 基于正则化模板与重建误差分解的快速目标跟踪方法
US20220132358A1 (en) * 2020-10-28 2022-04-28 At&T Intellectual Property I, L.P. Network function selection for increased quality of service in communication networks
US20220253611A1 (en) * 2017-05-10 2022-08-11 Oracle International Corporation Techniques for maintaining rhetorical flow

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HENRY NEEB; CHRISTOPHER KURRUS: "Distributed K-Nearest Neighbors", 2016, Stanford University
