
WO2025074448A1 - System and method for managing selection and execution sequence of one or more artificial intelligence (AI) models - Google Patents


Info

Publication number
WO2025074448A1
Authority
WO
WIPO (PCT)
Prior art keywords
models
selection
execution sequence
dataset
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IN2024/052014
Other languages
English (en)
Inventor
Aayush Bhatnagar
Ankit Murarka
Jugal Kishore
Chandra GANVEER
Sanjana Chaudhary
Gourav Gurbani
Yogesh Kumar
Avinash Kushwaha
Dharmendra Kumar Vishwakarma
Sajal Soni
Niharika PATNAM
Shubham Ingle
Harsh Poddar
Sanket KUMTHEKAR
Mohit Bhanwria
Shashank Bhushan
Vinay Gayki
Aniket KHADE
Durgesh KUMAR
Zenith KUMAR
Gaurav Kumar
Manasvi Rajani
Kishan Sahu
Sunil Meena
Supriya KAUSHIK DE
Mehul Tilala
Satish Narayan
Rahul Kumar
Harshita GARG
Kunal Telgote
Ralph LOBO
Girish DANGE
Kumar Debashish
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jio Platforms Ltd
Original Assignee
Jio Platforms Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jio Platforms Ltd filed Critical Jio Platforms Ltd
Publication of WO2025074448A1
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G06N 3/08 - Learning methods

Definitions

  • The present invention relates to the field of wireless communication systems, and more particularly to a method and a system for managing the selection and execution sequence of one or more Artificial Intelligence (AI) models.
  • One or more embodiments of the present disclosure provide a method and a system for managing the selection and execution sequence of one or more AI models.
  • The method for managing the selection and execution sequence of one or more Artificial Intelligence (AI) models includes the step of analysing, by one or more processors, a request received from a user to identify at least a type of task to be performed.
  • The method further includes the step of generating, by the one or more processors, a list comprising the one or more AI models to perform the task based on the analysis of the request.
  • The method further includes the step of receiving, by the one or more processors, an input from the user corresponding to a selection of the one or more AI models from the generated list and an execution sequence of the selected one or more AI models.
  • The method further includes the step of providing, by the one or more processors, feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the AI models.
  • The request comprises a dataset, and one or more characteristics corresponding to the dataset are identified by the one or more processors based on the analysis of the request.
  • The type of task is identified based on the analysis of the one or more characteristics corresponding to the dataset of the request, wherein the one or more characteristics of the dataset comprise size, dimensionality, and datatypes.
  • The type of the task is at least one of classification, regression, and clustering.
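  As a concrete illustration, the dataset characteristics named above (size, dimensionality, and datatypes) can be derived directly from a tabular dataset. This is a minimal sketch; the function name, input structure, and returned fields are assumptions for illustration, not taken from the patent:

```python
def dataset_characteristics(rows):
    """Derive size, dimensionality, and datatypes from a tabular dataset.

    `rows` is assumed to be a list of equal-length feature lists
    (an illustrative structure, not the patent's actual format).
    """
    size = len(rows)                              # number of records
    dimensionality = len(rows[0]) if rows else 0  # features per record
    datatypes = {type(v).__name__ for row in rows for v in row}
    return {
        "size": size,
        "dimensionality": dimensionality,
        "datatypes": sorted(datatypes),
    }
```

  For a two-row dataset of mixed integer and string columns, this reports a size of 2, a dimensionality of 2, and the datatypes present in the data.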
  • The generated list comprises the one or more AI models and is transmitted to the UE.
  • The method comprises the step of generating, by the one or more processors, a visual representation of the execution sequence of the one or more selected AI models on at least one of the UE and the UI.
  • Modifying the selection and the execution sequence of the one or more AI models comprises the step of receiving, by the one or more processors, a modification input from the user based on the feedback.
  • The modification input corresponds to a modification of one of the execution sequence and one or more parameters of each of the one or more selected AI models, wherein the one or more parameters are at least a learning rate, a regularization strength, and a batch size of each of the one or more AI models.
  • The method comprises the step of storing, by the one or more processors, logs related to performance metrics corresponding to training of the one or more selected AI models utilizing the dataset, wherein the performance metrics are at least one of an accuracy, a loss, and a convergence rate.
  • The system for managing the selection and execution sequence of one or more AI models includes an analysing unit configured to analyse a request received from a user to identify at least a type of task to be performed.
  • The system further includes a generating unit configured to generate a list comprising the one or more AI models to perform the task based on the analysis of the request.
  • The system further includes a receiving unit configured to receive an input from the user corresponding to a selection of one or more AI models from the generated list and an execution sequence of the selected one or more AI models.
  • The system further includes a feedback unit configured to provide feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the one or more AI models.
  • A non-transitory computer-readable medium has stored thereon computer-readable instructions that, when executed by a processor, cause the processor to perform the following operations.
  • The processor is configured to analyse a request received from a user to identify at least a type of task to be performed.
  • The processor is further configured to generate a list comprising the one or more AI models to perform the task based on the analysis of the request.
  • The processor is further configured to receive an input from the user corresponding to a selection of one or more AI models from the generated list and an execution sequence of the selected one or more AI models.
  • The processor is further configured to provide feedback corresponding to the selection of the one or more AI models and the execution sequence of the one or more AI models so as to modify the selection and the execution sequence of the AI models.
  • FIG. 1 is an exemplary block diagram of an environment for managing the selection and execution sequence of one or more AI models, according to one or more embodiments of the present invention.
  • FIG. 2 is an exemplary block diagram of a system for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present invention.
  • FIG. 3 is an exemplary architecture of the system of FIG. 2, according to one or more embodiments of the present invention.
  • FIG. 4 is an exemplary architecture for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present disclosure.
  • FIG. 5 is an exemplary signal flow diagram illustrating the flow for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present disclosure.
  • FIG. 6 is a flow diagram of a method for managing the selection and the execution sequence of one or more AI models, according to one or more embodiments of the present invention.
  • Various embodiments of the present invention provide a system and a method for managing a selection and execution sequence of one or more Artificial Intelligence (AI) models.
  • The present invention includes an interface which provides users with a transparent and intuitive way to select the one or more AI models and arrange the selected AI models in an optimal execution sequence, leading to better decision-making and potentially higher AI model performance.
  • The present invention enhances the system's adaptability by enabling the user to adjust or fine-tune the execution sequence of the one or more selected AI models as per the requirements.
  • FIG. 1 illustrates an exemplary block diagram of an environment 100 for managing a selection and execution sequence of one or more Artificial Intelligence (AI) models 220, according to one or more embodiments of the present invention.
  • The environment 100 includes a User Equipment (UE) 102, a server 104, a network 106, and a system 108.
  • The one or more AI models refer to different frameworks or paradigms for solving problems or performing tasks using logic.
  • The one or more AI models 220 represent various approaches to logic design and problem-solving, each suited to different types of tasks.
  • The disclosed system 108 enables a user to select the one or more AI models 220 among a plurality of AI models within the system 108 for a given task.
  • The system 108 enables the user to arrange the selected one or more AI models 220 in an optimal execution sequence for chaining the one or more AI models 220 together in order to execute the task.
  • Each of the at least one UE 102, namely the first UE 102a, the second UE 102b, and the third UE 102c, is configured to connect to the server 104 via the network 106.
  • Each of the first UE 102a, the second UE 102b, and the third UE 102c is, but is not limited to, any electrical, electronic, or electromechanical equipment, or a combination of one or more such devices, such as smartphones, Virtual Reality (VR) devices, Augmented Reality (AR) devices, laptops, general-purpose computers, desktops, personal digital assistants, tablet computers, mainframe computers, or any other computing device.
  • The network 106 includes, by way of example but not limitation, one or more of a wireless network, a wired network, an internet, an intranet, a public network, a private network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a Public-Switched Telephone Network (PSTN), a cable network, a cellular network, a satellite network, a fiber optic network, or some combination thereof.
  • The network 106 may include, but is not limited to, a Third Generation (3G), a Fourth Generation (4G), a Fifth Generation (5G), a Sixth Generation (6G), a New Radio (NR), a Narrow Band Internet of Things (NB-IoT), an Open Radio Access Network (O-RAN), and the like.
  • The network 106 may also include, by way of example but not limitation, at least a portion of one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, one or more messages, packets, signals, waves, voltage or current levels, some combination thereof, or so forth.
  • The environment 100 includes the server 104 accessible via the network 106.
  • The server 104 may include, by way of example but not limitation, one or more of a standalone server, a server blade, a server rack, a bank of servers, a server farm, hardware supporting a part of a cloud service or system, a home server, hardware running a virtualized server, a processor executing code to function as a server, one or more machines performing server-side functionality as described herein, at least a portion of any of the above, or some combination thereof.
  • The entity may include, but is not limited to, a vendor, a network operator, a company, an organization, a university, a lab facility, a business enterprise, a defense facility, or any other facility that provides a service.
  • The environment 100 further includes the system 108 communicably coupled to the server 104 and the UE 102 via the network 106.
  • The system 108 is adapted to be embedded within the server 104 or deployed as an individual entity.
  • FIG. 2 is an exemplary block diagram of the system 108 for managing the selection and the execution sequence of one or more AI models 220, according to one or more embodiments of the present invention.
  • The system 108 for managing the selection and the execution sequence of one or more AI models 220 includes one or more processors 202, a memory 204, a storage unit 206, a plurality of AI models 220, and a User Interface (UI) 222.
  • The one or more processors 202, hereinafter referred to as the processor 202, may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
  • The system 108 may include multiple processors as per the requirement, without deviating from the scope of the present disclosure.
  • The processor 202 is configured to fetch and execute computer-readable instructions stored in the memory 204, as the memory 204 is communicably connected to the processor 202.
  • The memory 204 is configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the selection and the execution sequence of one or more AI models 220.
  • The memory 204 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
  • The storage unit 206 is configured to store data associated with the plurality of AI models 220.
  • The storage unit 206 is one of, but not limited to, a centralized database, a cloud-based database, a commercial database, an open-source database, a distributed database, an end-user database, a graphical database, a Not-only-Structured Query Language (NoSQL) database, an object-oriented database, a personal database, an in-memory database, a document-based database, a time series database, a wide column database, a key-value database, a search database, a cache database, and so forth.
  • The system 108 includes the plurality of AI models 220.
  • The plurality of AI models 220 are systematic procedures or formulas for solving problems or performing tasks, which are used to process data, make decisions, and perform various operations.
  • The plurality of AI models 220, which select suitable logic for particular tasks, are generally Artificial Intelligence/Machine Learning (AI/ML) models.
  • The tasks are related to machine learning tasks.
  • The model 220 facilitates solving real-world problems without extensive manual intervention.
  • The plurality of AI models 220 are also referred to as the one or more AI models 220, and the terms can be used interchangeably without limiting the scope of the invention.
  • The system 108 includes the UI 222.
  • The UI 222 is included in the UE 102.
  • The UI 222 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like.
  • The UI 222 allows a user to transmit a request to the system 108 for performing the task.
  • The user acts as the data source.
  • The user may be at least one of, but not limited to, a network operator.
  • The UI 222 allows the users to select the one or more AI models 220 and arrange the one or more selected AI models 220 in the optimal execution sequence.
  • The UI 222 allows the users to quickly adjust or fine-tune the execution sequence of the one or more selected AI models 220 as per the requirements of the user, which enhances the adaptability of the system 108.
  • The UI 222 is either embedded within the system 108 or embedded within the UE 102.
  • The UI 222 of the system 108 and the UI 222 of the UE 102 can be used interchangeably without limiting the scope of the invention.
  • The system 108 includes the processor 202 for managing the selection and the execution sequence of one or more AI models 220.
  • The processor 202 includes a receiving unit 208, an analysing unit 210, a generating unit 212, a transmitting unit 214, a processing unit 216, an executing unit 218, a feedback unit 224, and a logging unit 226.
  • The processor 202 is communicably coupled to the one or more components of the system 108, such as the memory 204, the storage unit 206, the plurality of AI models 220, and the UI 222.
  • Operations and functionalities of the receiving unit 208, the analysing unit 210, the generating unit 212, the transmitting unit 214, the processing unit 216, the executing unit 218, the feedback unit 224, the logging unit 226, and the one or more components of the system 108 can be used in combination or interchangeably.
  • The receiving unit 208 of the processor 202 is configured to receive a request from a user via the UE 102 for performing the task.
  • The user transmits the request via the UI 222 of the UE 102 for performing the task.
  • The request includes datasets and one or more characteristics corresponding to each of the datasets.
  • The one or more characteristics of the dataset comprise at least one of, but not limited to, size, dimensionality, and datatypes.
  • The task is at least one of, but not limited to, a classification, a regression, and a clustering of the datasets received in the request.
  • The receiving unit 208 receives the request from the user's UE 102 via an interface specifically constructed for connectivity between the system 108 and the UE 102.
  • The interface includes at least one of, but not limited to, Application Programming Interfaces (APIs).
  • APIs are a set of rules and protocols that allow different software applications to communicate with each other.
  • The APIs are essential for integrating different systems, accessing services, and extending functionality.
  • Upon receiving the request from the UE 102, more particularly from the UI 222, the analysing unit 210 of the processor 202 is configured to analyse the request to identify at least a type of task to be performed.
  • The type of the task is at least one of, but not limited to, classification, regression, and clustering of the dataset received in the request.
  • The type of task is identified based on the analysis of the one or more characteristics corresponding to each dataset of the request.
  • The analysing unit 210 assesses the user's input, such as the dataset included in the received request, to understand the nature of the task.
  • The analysing unit 210 identifies whether at least one of the classification, the regression, or the clustering task is to be performed.
  • The nature of the target variable refers to the type of value the processor 202 is trying to predict or estimate in a machine learning task.
  • The nature of the target variable plays a crucial role in determining the type of problem the system 108 is dealing with. Let us consider an example of a problem associated with image processing. Further, let us assume the dataset provided by the user is associated with images, and the images include different objects like cats, dogs, and cars, with labels. Herein, the objective is to determine what objects are included in the images.
  • The analysing unit 210 checks the target variables, and if each image in the dataset is labeled with a category (e.g., "cat", "dog", "car"), and there is a requirement of predicting the category of new images, then the analysing unit 210 identifies that the classification task needs to be performed.
  • In another embodiment, for example, if each entry in the dataset is associated with a continuous numeric value to be predicted, then the analysing unit 210 identifies that the regression task needs to be performed. In yet another embodiment, for example, if the dataset provided by the user includes similar data and the objective is to group the similar data, then the analysing unit 210 identifies that the clustering task needs to be performed.
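  The target-variable check described above can be sketched as a small dispatch function. This is a simplified illustration under stated assumptions (categorical string labels imply classification, numeric targets imply regression, absent targets imply clustering); the function name is not from the patent:

```python
def identify_task_type(targets):
    """Infer the ML task type from the target variable (simplified sketch)."""
    if targets is None:
        # No labels at all: group similar records together.
        return "clustering"
    if all(isinstance(t, str) for t in targets):
        # Categorical labels such as "cat", "dog", "car".
        return "classification"
    # Continuous numeric targets to be predicted.
    return "regression"
```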
  • The generating unit 212 of the processor 202 is configured to generate a list which includes the one or more AI models 220 to perform the identified task based on the analysis of the request.
  • For the classification task, one or more AI models 220 such as at least one of, but not limited to, neural network and decision tree AI models 220 will be included in the list by the generating unit 212.
  • For the regression task, one or more AI models 220 such as at least one of, but not limited to, linear regression and polynomial regression AI models 220 will be included in the list by the generating unit 212.
  • For the clustering task, one or more AI models 220 such as at least one of, but not limited to, K-means clustering and hierarchical clustering AI models 220 will be included in the list by the generating unit 212.
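  The mapping from task type to candidate models can be sketched as a static catalogue lookup. The model identifiers below are illustrative placeholders mirroring the examples in the description, not an actual registry from the patent:

```python
# Illustrative catalogue: task type -> candidate model identifiers.
MODEL_CATALOGUE = {
    "classification": ["neural_network", "decision_tree"],
    "regression": ["linear_regression", "polynomial_regression"],
    "clustering": ["kmeans", "hierarchical_clustering"],
}

def generate_model_list(task_type):
    """Return the candidate models for the identified task type."""
    return MODEL_CATALOGUE.get(task_type, [])
```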
  • The transmitting unit 214 of the processor 202 is configured to transmit the generated list to the UE 102. Thereafter, the UI 222 of the UE 102 generates a visual representation of the list including the one or more AI models 220 utilizing the generating unit 212.
  • Upon generation of the visual representation of the generated list, the user selects the preferred one or more AI models 220 from the generated list represented on the UI 222 of the UE 102. Upon selection of the one or more AI models 220, the generating unit 212 generates the visual representation of the execution sequence of the one or more selected AI models on the UI 222 of the UE 102. Further, the user arranges the execution sequence of the selected one or more AI models 220 on at least one of the UE 102 and the UI 222.
  • The user fine-tunes one or more parameters related to each of the selected one or more AI models 220 via the UE 102.
  • The UI 222 of the UE 102 includes a plurality of intuitive controls which are designed to make user interactions seamless and effortless.
  • The intuitive controls are used by the user to fine-tune one or more parameters of each of the selected one or more AI models 220.
  • The one or more parameters are at least one of, but not limited to, learning rates, regularization strengths, and batch sizes of each of the selected one or more AI models 220.
  • The learning rate of the selected one or more AI models 220 is a crucial hyperparameter that influences how quickly or slowly the one or more AI models 220 learn during training.
  • The regularization strength is a hyperparameter that controls the amount of regularization applied to each of the selected one or more AI models 220 to prevent overfitting. Overfitting is a common problem in machine learning where the one or more AI models 220 learn to perform very well on the training dataset but fail to generalize effectively to new, unseen data.
  • The batch size is a crucial hyperparameter which refers to the number of training examples utilized in one iteration of the training process of the selected one or more AI models 220. Instead of processing the entire dataset at once, the selected one or more AI models 220 process smaller subsets (batches) of the dataset.
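  The three hyperparameters above can be grouped into a small, immutable parameter record that a user's modification input updates. This is a minimal sketch with assumed default values and names; none of them are specified by the patent:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ModelParams:
    learning_rate: float = 0.01             # step size during training
    regularization_strength: float = 0.001  # penalty to curb overfitting
    batch_size: int = 32                    # examples per training iteration

def apply_modification(params, **updates):
    """Return a new parameter set with the user's modification applied."""
    return replace(params, **updates)
```

  A frozen dataclass keeps each modification an explicit, auditable step rather than an in-place mutation.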
  • The receiving unit 208 of the processor 202 is configured to receive an input from the user corresponding to the selection of the one or more AI models 220 from the generated list and the execution sequence of the selected one or more AI models 220 arranged by the user.
  • Upon receiving the input from the user corresponding to the selection of the one or more AI models 220 and the execution sequence of the selected one or more AI models 220, the processing unit 216 of the processor 202 is configured to preprocess the dataset received in the request. In one embodiment, the processing unit 216 is configured to perform at least one of, but not limited to, data scaling, encoding, feature selection, and normalization of the dataset to ensure data consistency and quality within the system 108.
  • The data normalization is the process of at least one of, but not limited to, reorganizing the data within the dataset, removing redundant data from the dataset, formatting the data within the dataset, removing null values from the dataset, and handling missing values in the dataset.
  • The main goal of the processing unit 216 is to achieve a standardized data format across the system 108.
  • The processing unit 216 eliminates duplicate data and inconsistencies, which reduces manual effort.
  • The processing unit 216 ensures that the preprocessed dataset is stored appropriately in the storage unit 206 for subsequent retrieval and analysis.
  • The data scaling refers to the process of normalizing or standardizing the range of independent variables (features) in the dataset. When features are on a similar scale, the selected one or more AI models 220 perform better or converge faster.
  • Encoding is the process of converting variables into a numerical format that can be used by the selected one or more AI models 220.
  • The feature selection involves choosing a subset of relevant variables for building the one or more AI models 220, which facilitates improving the performance of the selected one or more AI models 220, reducing overfitting, and decreasing computational cost.
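  Two of the preprocessing steps mentioned above, scaling and encoding, can be sketched in plain Python. These are common textbook choices (min-max scaling and one-hot encoding), assumed here for illustration rather than prescribed by the patent:

```python
def min_max_scale(values):
    """Scale a numeric feature into the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant feature carries no spread; map it to 0.0.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def one_hot_encode(labels):
    """Convert categorical labels into one-hot numeric vectors."""
    categories = sorted(set(labels))
    return [[1 if lab == c else 0 for c in categories] for lab in labels]
```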
  • The executing unit 218 of the processor 202 is configured to chain the one or more selected AI models 220 together, which ensures that the dataset flows seamlessly from one AI model 220 to the next AI model 220 in the specified execution sequence.
  • The executing unit 218 includes an algorithmic framework that manages the sequencing of the selected one or more AI models 220.
  • The algorithmic framework is a structured approach that provides a set of guidelines and tools for designing, implementing, and managing the selected one or more AI models 220 within a specific context, such as machine learning, optimization, or data processing.
  • The executing unit 218 links the one or more selected AI models 220 in a sequential manner to form a pipeline of the one or more selected AI models 220.
  • The executing unit 218 of the processor 202 is configured to execute each of the one or more selected AI models 220 in the determined execution sequence utilizing the dataset.
  • The dataset is fed to a first AI model in the execution sequence of the one or more selected AI models 220.
  • The first AI model processes the dataset and produces an output.
  • The executing unit 218 provides the produced output to the next AI model present in the execution sequence of the one or more selected AI models 220.
  • The output of each of the one or more selected AI models 220 is an input for a subsequent AI model of the one or more selected AI models 220. This process continues iteratively until the last AI model in the execution sequence is reached.
  • The one or more selected AI models 220 process the dataset.
  • The one or more selected AI models 220 are trained on historical data associated with previously executed tasks. Based on the training, the one or more selected AI models 220 process the dataset.
  • The executing unit 218 executes each of the one or more selected AI models 220 iteratively. While iteratively processing the one or more selected AI models 220, when the last AI model in the execution sequence produces an output, that output is inferred as the final output. In particular, the final output is used for further analysis or application within the network 106.
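  The chaining behaviour described above reduces to folding the dataset through the models in sequence. The sketch below treats each model as a callable, a simplification of the patent's executing unit rather than its actual implementation:

```python
def execute_chain(models, dataset):
    """Feed `dataset` to the first model, then each output to the next.

    `models` is an ordered list of callables (the chosen execution
    sequence); the last model's output is the final output.
    """
    output = dataset
    for model in models:
        output = model(output)
    return output
```

  For example, chaining a step that doubles each value with a step that sums the values turns `[1, 2, 3]` into `12`.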
  • The feedback unit 224 of the processor 202 is configured to provide feedback to the user via the UE 102.
  • The feedback includes information related to the performance of each of the selected one or more AI models 220.
  • The final output is provided in the feedback to the user.
  • The user analyses the performance of each of the selected one or more AI models 220 on the UI 222.
  • The user analyses the performance of each of the selected one or more AI models 220 by comparing the performance metrics of each of the selected one or more AI models 220 with a predefined set of performance metrics.
  • The predefined set of performance metrics is at least one of, but not limited to, an accuracy, a loss, and a convergence rate.
  • The predefined set of performance metrics is defined by the user based on the historical data related to the tasks.
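  The comparison against the predefined metrics can be sketched as a tolerance check. The metric names and the tolerance value below are assumptions for illustration; the patent does not fix a comparison rule:

```python
def within_threshold(measured, predefined, tolerance=0.05):
    """Check each measured metric against its predefined counterpart.

    Returns True when every predefined metric is matched to within
    `tolerance`, suggesting the selected models suit the task.
    """
    return all(
        abs(measured[name] - target) <= tolerance
        for name, target in predefined.items()
    )
```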
  • the user based on comparison when the user determines that the performance metrics of the each of the selected one or more Al models 220 are similar or within a range of the predefined set of performance metrics, then the user infers that performance of the one or more selected Al models 220 is suitable to perform the task utilizing the dataset. Thereafter, the user stores the one or more selected Al models 220 along with the execution sequence in the storage unit 206.
  • the user upon receiving the feedback from the feedback unit 224, analyses the performance of each of the selected one or more Al models 220. In one embodiment, based on comparison when the user determines that the performance metrics of the each of the selected one or more Al models 220 not similar or within a range of the predefined set of performance metrics, then the user infers that performance of the one or more selected Al models 220 is not suitable to perform the task utilizing the dataset. Then the user transmits a modification input to the processor 202 via one of the UE 102 and the UI 222. In particular, the receiving unit 228 of the processor 202 is configured to receive the modification input.
  • the modification input corresponds to modification of one of the execution sequence and one or more parameters of each of the one or more selected Al models 220 via one of the UE 102 and the UI 222.
  • the user transmits the modification input to the system 108 in order to at least one of, but not limited to, select different one or more one or more Al models 220 and one or more one or more Al models 220 one or more one or more Al models 220 so that the selected one or more Al models 220 are suitable for performing the task utilizing the dataset provided by the user.
  • the logging unit 226 of the processor 202 is configured to store logs pertaining to at least one of, but not limited to, selection of the one or more Al models 220, the output produced by each of the one or more selected Al models 220 and the performance metrics of the one or more selected Al models 220 in the storage unit 206.
  • the logs facilitate at least one of, but not limited to, monitoring and analysing system behaviour and the performance of the system over time.
  • the logs pertaining to at least one of, but not limited to, the selection of the one or more Al models 220, the output produced by each of the one or more selected Al models 220 and the performance metrics of the one or more selected Al models 220 are notified to the user in real time.
  • the accuracy and efficiency in complex tasks involving a plurality of Al models 220 is increased, due to which the overall performance of the system 108 is improved.
  • the receiving unit 208, the analysing unit 210, the generating unit 212, the transmitting unit 214, the processing unit 216, the executing unit 218, the feedback unit 224, and the logging unit 226 in an exemplary embodiment, are implemented as a combination of hardware and programming (for example, programmable instructions) to implement one or more functionalities of the processor 202.
  • programming for the processor 202 may be processor-executable instructions stored on a non-transitory machine-readable storage medium and the hardware for the processor may comprise a processing resource (for example, one or more processors), to execute such instructions.
  • FIG. 3 illustrates an exemplary architecture for the system 108, according to one or more embodiments of the present invention. More specifically, FIG. 3 illustrates the system 108 for managing the selection and the execution sequence of one or more Al models 220. It is to be noted that the embodiment with respect to FIG. 3 is explained with respect to the UE 102 for the purpose of description and illustration and should nowhere be construed as limiting the scope of the present disclosure.
  • FIG. 3 shows communication between the UE 102, and the system 108.
  • the UE 102 uses network protocol connection to communicate with the system 108.
  • the network protocol connection is the establishment and management of communication between the UE 102, and the system 108 over the network 106 (as shown in FIG. 1) using a specific protocol or set of protocols.
  • the network protocol connection includes, but not limited to, Session Initiation Protocol (SIP), System Information Block (SIB) protocol, Transmission Control Protocol (TCP), User Datagram Protocol (UDP), File Transfer Protocol (FTP), Hypertext Transfer Protocol (HTTP), Simple Network Management Protocol (SNMP), Internet Control Message Protocol (ICMP), Hypertext Transfer Protocol Secure (HTTPS) and Terminal Network (TELNET).
  • SIP Session Initiation Protocol
  • SIB System Information Block
  • TCP Transmission Control Protocol
  • UDP User Datagram Protocol
  • FTP File Transfer Protocol
  • HTTP Hypertext Transfer Protocol
  • SNMP Simple Network Management Protocol
  • ICMP Internet Control Message Protocol
  • HTTPS Hypertext Transfer Protocol Secure
  • TELNET Terminal Network
  • the UE 102 includes a primary processor 302, and a memory 304 and the UI 222.
  • the UE 102 may include more than one primary processor 302 as per the requirement of the network 106.
  • the primary processor 302 may be implemented as one or more microprocessors, microcomputers, microcontrollers, digital signal processors, central processing units, state machines, logic circuitries, single board computers, and/or any devices that manipulate signals based on operational instructions.
  • the primary processor 302 is configured to fetch and execute computer-readable instructions stored in the memory 304.
  • the memory 304 may be configured to store one or more computer-readable instructions or routines in a non-transitory computer-readable storage medium, which may be fetched and executed for managing the selection and the execution sequence of the one or more Al models 220.
  • the memory 304 may include any non-transitory storage device including, for example, volatile memory such as RAM, or non-volatile memory such as disk memory, EPROMs, FLASH memory, unalterable memory, and the like.
  • the UI 222 includes a variety of interfaces, for example, a graphical user interface, a web user interface, a Command Line Interface (CLI), and the like.
  • the UI 222 of the UE 102 allows the user to select the one or more Al models 220 and arrange the one or more selected Al models 220 in the optimal execution sequence.
  • the UI 222 is included in the UE 102.
  • the UI 222 is further configured to provide the visual representation of the list including the one or more Al models 220 to the user.
  • the UI 222 also provides the visual representation of the execution sequence of one or more selected Al models 220 to the user.
  • the system 108 includes the processors 202 and the memory 204, for managing the selection and the execution sequence of the one or more Al models 220, which are already explained in FIG. 2.
  • a similar description related to the working and operation of the system 108 as illustrated in FIG. 2 has been omitted to avoid repetition.
  • FIG. 4 illustrates an exemplary architecture 400 of the system 108 for managing the selection and the execution sequence of one or more Al models 220, according to one or more embodiments of the present disclosure.
  • the architecture 400 includes the UI 222, an Integrated Performance Management (IPM) 402, the processor 202, the storage unit 206, and a workflow manager 404 communicably coupled to each other via the network 106.
  • IPM Integrated Performance Management
  • the UI 222 enables the users to transmit the dataset to perform the task. Further, the UI 222 enables the users to select the one or more Al models 220 from the list and arrange the selected one or more Al models 220 in the execution sequence within the UI 222. The UI 222 provides visual representations of the selected one or more Al models 220 along with the execution sequence of the selected one or more Al models 220 so that the user can review and adjust the execution sequence. Utilizing the UI 222, the user fine-tunes the one or more parameters of each of the selected one or more Al models 220 using the intuitive controls in the UI 222.
  • the Integrated Performance Management (IPM) 402 refers to a systematic approach to managing and enhancing performance using various one or more Al models 220 and methodologies. This integration helps organizations align their strategies with operational execution and improve decision-making.
  • the system 108 initiates execution upon the selection of the one or more Al models 220 and the execution sequence.
  • the system 108 is configured to feed the dataset to the first Al model in the execution sequence of the selected one or more Al models 220; the first Al model then produces the output, which is fed to the next Al model as the input.
  • the system 108 continues the process iteratively until the last Al model in the execution sequence is reached. When the last Al model processes the received data, the final output is produced, which represents the result or prediction of the system 108.
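The iterative chaining described above, where each model's output becomes the next model's input and the last model's output is the final result, can be sketched as below. The simple callables stand in for real Al models and are illustrative assumptions, not the actual models of the system.

```python
# Sketch of sequential model chaining: feed the dataset to the first model,
# pipe each output into the next model, return the last model's output.
def run_sequence(models, dataset):
    data = dataset
    for model in models:      # iterate in the user-arranged execution sequence
        data = model(data)    # output of one model feeds the next
    return data               # output of the last model is the final output

# Stand-in "models" for illustration: normalise the values, then score them.
normalise = lambda xs: [x / max(xs) for x in xs]
score = lambda xs: sum(xs) / len(xs)

final_output = run_sequence([normalise, score], [2.0, 4.0, 8.0])
```

The design choice mirrors the description: the sequence is data, so the user can rearrange it without changing the execution code.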
  • the logs related to the final output and the execution sequence of the one or more Al models 220 are stored in the storage unit 206.
  • the workflow manager 404 extracts the information related to the final output and the execution sequence of the one or more Al models 220 and provides the information as the feedback to the user via the UI 222.
  • the workflow manager 404 is a tool or system designed to streamline, coordinate, and automate tasks and processes within an organization.
  • the workflow manager 404 facilitates managing complex workflows by defining, monitoring, and optimizing the flow of work from one step to another.
  • the user provides the modification input to the system 108 utilizing the UI 222 in which the user had fine-tuned the one or more parameters of each of the selected one or more Al models 220 and changed the execution sequence of the selected one or more Al models 220.
  • FIG. 5 is a signal flow diagram illustrating the flow for managing the selection and the execution sequence of one or more Al models 220, according to one or more embodiments of the present disclosure.
  • the system 108 receives the request from the user for executing the tasks.
  • the request includes the dataset.
  • the system 108 identifies the task to be performed based on identifying the characteristics of the dataset included in the request.
  • the task is at least one of, but not limited to, the classification, the regression, and the clustering of the datasets received in the request. Identifying tasks based on the characteristics of a dataset involves analyzing the dataset to determine the task to be performed.
  • the system 108 generates the list of the one or more Al models 220 based on the identified tasks to be performed. For example, if the classification task is to be performed, then the system 108 generates a list including at least one of, but not limited to, neural network and decision tree Al models 220. Further, the system 108 displays the generated list of the one or more Al models 220 on the UI 222 of the UE 102.
  • the user selects the one or more Al models 220 from the generated list which is displayed on the UI 222 and the user arranges the execution sequence of the one or more Al models 220 via the UI 222.
  • the system 108 executes the selected one or more Al models 220 in the arranged execution sequence utilizing the dataset. For example, the received data is fed to the first Al model which is processed, and the output is produced. The output generated by the first Al model is fed to the next Al model. The execution continues iteratively until the last Al model in the sequence is reached. When the last Al model produces the output, that output is considered as the final output of the system 108.
  • the final output of the one or more Al models 220 and information related to the performance of each of the selected one or more Al models 220 is provided as the feedback to the user. Further, the user analyses the performance of each of the selected one or more Al models 220 and the user transmits the modification input so as to modify the selection and the execution sequence of the one or more Al models.
  • FIG. 6 is a flow diagram of a method 600 for managing the selection and the execution sequence of one or more Al models 220, according to one or more embodiments of the present invention.
  • the method 600 is described with the embodiments as illustrated in FIG. 2 and should nowhere be construed as limiting the scope of the present disclosure.
  • the method 600 includes the step of analysing the request received from the user to identify at least the type of task to be performed.
  • the receiving unit 208 is configured to receive request from the user for performing the task.
  • the request includes the dataset provided by the user.
  • the request is a Hypertext Transfer Protocol version 2 (HTTP2) request.
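A minimal sketch of the kind of request body the receiving unit might accept over HTTP/2, with the dataset carried inside it. The field names ("task_hint", "dataset", "columns", "rows") are hypothetical for illustration; the disclosure does not define the request schema.

```python
import json

# Hypothetical JSON body for the user's request: the dataset travels
# inside the request, and the system infers the task itself.
request_body = json.dumps({
    "task_hint": None,  # no hint: the analysing unit identifies the task
    "dataset": {
        "columns": ["area", "price"],
        "rows": [[1200, 350000], [900, 275000]],
    },
})

# The receiving unit would decode the body before analysis.
decoded = json.loads(request_body)
```
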
  • the analysing unit 210 is configured to analyse the request received from the user to identify at least a type of task to be performed.
  • the analysing unit 210 identifies the characteristics of the dataset and, based on the identified characteristics, the analysing unit 210 identifies the type of task to be performed.
  • the analysing unit 210 checks the size and the target variables in the dataset; if the dataset includes numerical values, then the analysing unit 210 identifies that the regression task needs to be performed.
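The characteristic check described above can be sketched as a simple rule over the target variable: numerical values suggest regression, categorical values suggest classification, and a missing target suggests clustering. The function name and the exact rules are illustrative assumptions, not the analysing unit's actual logic.

```python
# Hedged sketch of task identification from dataset characteristics.
def identify_task(target_values):
    """Infer the task type from the target variable of the dataset."""
    if target_values is None:
        return "clustering"       # no target variable to predict
    if all(isinstance(v, (int, float)) for v in target_values):
        return "regression"       # numerical target values
    return "classification"       # categorical target values

task = identify_task([350000, 275000])  # -> "regression"
```
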
  • the method 600 includes the step of generating the list comprising the one or more Al models 220 to perform the task based on the analysis of the request.
  • the generating unit 212 is configured to generate the list which includes the one or more Al models 220 to perform the identified task based on the analysis of the request. For example, based on analysis of the request, if the analysing unit 210 had identified that the regression task needs to be performed, then the generating unit 212 generates the list including the one or more Al models 220 such as at least one of, but not limited to, a linear regression and polynomial regression Al models 220 which are able to perform the regression task.
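In the spirit of the generating unit described above, a mapping from an identified task to candidate models can be sketched as below. The catalogue contents and names are example entries only, not an exhaustive or authoritative list of the models the system offers.

```python
# Illustrative task-to-models catalogue; entries are examples, not a
# definitive list of the one or more Al models available in the system.
MODEL_CATALOGUE = {
    "regression": ["linear regression", "polynomial regression"],
    "classification": ["neural network", "decision tree"],
    "clustering": ["k-means", "DBSCAN"],
}

def generate_model_list(task):
    """Return the candidate models able to perform the identified task."""
    return MODEL_CATALOGUE.get(task, [])
```

The generated list would then be displayed on the UI for the user to select from.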
  • the method 600 includes the step of receiving an input from the user corresponding to the selection of the one or more Al models 220 from the generated list and the execution sequence of the selected one or more Al models 220.
  • the receiving unit 208 is configured to receive the input from the user corresponding to the selection of the one or more Al models 220 from the generated list and the execution sequence of the selected one or more Al models 220.
  • the generated list is displayed on the UI 222 of the UE 102.
  • the user selects the preferred one or more Al models 220 from the generated list which is displayed on the UI 222 and then the user arranges the execution sequence of one or more selected Al models 220. For example, let us consider there are 10 Al models which are displayed on the UI 222 as the generated list. Based on the user preference, the user selects at least 5 Al models to perform the task and then the user arranges the execution sequence of the 5 Al models.
  • the method 600 includes the step of providing feedback corresponding to the selection of the one or more Al models 220 and the execution sequence of the one or more Al models 220 so as to modify the selection and the execution sequence of the Al models 220.
  • the feedback unit 224 is configured to provide the feedback corresponding to the selection of the one or more Al models 220 and the execution sequence of the one or more Al models 220.
  • the processing unit 216 preprocesses the dataset received from the request and then the executing unit 218 chains the one or more selected Al models 220 together. Further, the executing unit 218 executes each of the one or more selected Al models 220 in the determined execution sequence using the dataset. For example, let us assume that there are 5 Al models selected for the task with the determined execution sequence such as Al model 1, Al model 2, ..., Al model 5. The dataset is fed to Al model 1, which generates the output based on the fed dataset. Thereafter, the output generated by Al model 1 is fed to Al model 2 as the input. Based on the fed input, Al model 2 generates the output. This process continues iteratively until Al model 5 in the sequence is reached. The output generated by the last Al model, which is Al model 5, is considered as the final output.
  • the feedback unit 224 is configured to provide the feedback to the user.
  • the feedback includes the selection of the one or more Al models 220, the execution sequence of the one or more Al models 220, and the final output.
  • based on the performance metrics of each of the selected one or more Al models 220, the user transmits the modification input to modify at least one of the selection of the one or more Al models 220 and the execution sequence of the one or more Al models 220.
  • the UI 222 provides the user a transparent and intuitive way for the selection of the one or more Al models 220 and the sequencing of the selected one or more Al models 220, which leads to better decision making and potentially higher performance of the one or more Al models 220.
  • a non-transitory computer-readable medium having stored thereon computer-readable instructions that, when executed by a processor 202, cause the processor 202 to perform the operations described in the following.
  • the processor 202 is configured to analyse the request received from the user to identify at least the type of task to be performed.
  • the processor 202 is further configured to generate the list comprising the one or more Al models 220 to perform the task based on the analysis of the request.
  • the processor 202 is further configured to receive an input from the user corresponding to the selection of one or more Al models 220 from the generated list and an execution sequence of the selected one or more Al models 220.
  • the processor 202 is further configured to provide feedback corresponding to the selection of the one or more Al models 220 and the execution sequence of the one or more Al models 220 so as to modify the selection and the execution sequence of the one or more Al models 220.
  • A person of ordinary skill in the art will readily ascertain that the illustrated embodiments and steps in the description and drawings (FIG. 1-6) are set out to explain the exemplary embodiments shown, and it should be anticipated that ongoing technological development will change the manner in which particular functions are performed. These examples are presented herein for purposes of illustration, and not limitation. Further, the boundaries of the functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed. Alternatives (including equivalents, extensions, variations, deviations, etc., of those described herein) will be apparent to persons skilled in the relevant art(s) based on the teachings contained herein. Such alternatives fall within the scope and spirit of the disclosed embodiments.
  • the present disclosure provides technical advancements of customization and flexibility, enabling users to tailor machine learning workflows based on the users' specific needs and datasets, allowing for more precise control over model training and data processing.
  • the invention improves decision making by providing users with an interactive interface that offers a transparent and intuitive way to make informed choices about the sequence of models, leading to better decision-making and potentially higher model performance.
  • the present invention enables quick adjustments to models sequences in response to changing data or evolving project requirements, enhancing the system's adaptability.
  • the invention empowers users with varying levels of machine learning expertise to actively participate in the model development process, democratizing machine learning capabilities.
  • the present invention offers multiple advantages over the prior art and the above listed are a few examples to emphasize on some of the advantageous features.
  • the listed advantages are to be read in a non-limiting manner.
  • UE User Equipment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Medical Informatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The present invention relates to a system (108) and a method (600) for managing the selection and execution sequence of one or more artificial intelligence (AI) models (220). The method (600) includes a step of analysing a request received from a user to identify at least a type of task to be performed. The method then includes generating a list comprising the one or more AI models (220) to perform the task based on the analysis of the request. The method further includes receiving an input from the user corresponding to the selection of the one or more AI models (220) from the generated list and an execution sequence of the selected one or more AI models (220). The method (600) includes the step of providing feedback corresponding to the selection of the one or more AI models (220) and the execution sequence of the one or more AI models (220) so as to modify the selection and the execution sequence of the one or more AI models (220).
PCT/IN2024/052014 2023-10-07 2024-10-07 System and method for managing the selection and execution sequence of one or more artificial intelligence (AI) models Pending WO2025074448A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202321067390 2023-10-07
IN202321067390 2023-10-07

Publications (1)

Publication Number Publication Date
WO2025074448A1 true WO2025074448A1 (fr) 2025-04-10

Family

ID=95282846

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IN2024/052014 Pending WO2025074448A1 (fr) 2023-10-07 2024-10-07 System and method for managing the selection and execution sequence of one or more artificial intelligence (AI) models

Country Status (1)

Country Link
WO (1) WO2025074448A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200372307A1 (en) * 2019-05-22 2020-11-26 Adobe Inc. Model insights framework for providing insight based on model evaluations to optimize machine learning models
CN114154406A (zh) * 2021-11-22 2022-03-08 厦门深度赋智科技有限公司 基于黑盒优化器的ai模型自动建模系统


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24874272

Country of ref document: EP

Kind code of ref document: A1