WO2025075613A1 - Modifying self-efficacy in artificial intelligence nodes - Google Patents
- Publication number
- WO2025075613A1 (PCT/US2023/034436)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- self
- nodes
- efficacy
- group
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/01—Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0985—Hyperparameter optimisation; Meta-learning; Learning-to-learn
Definitions
- FIG. 1 is a network diagram illustrating a network environment 100 suitable for modifying (e.g., adjusting) self-efficacy in an AI node, according to some example embodiments.
- the network environment 100 includes an AI machine 110, a database 115, and devices 130 and 150, all communicatively coupled to each other via a network 190.
- the AI machine 110, with or without the database 115, may form all or part of a cloud 118 (e.g., a geographically distributed set of multiple machines configured to function as a single server), which may form all or part of a network-based system 105 (e.g., a cloud-based server system configured to provide one or more network-based services to the devices 130 and 150).
- the AI machine 110 and the devices 130 and 150 may each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as described below with respect to FIG. 9.
- Also shown in FIG. 1 are users 132 and 152.
- One or both of the users 132 and 152 may be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the device 130 or 150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human).
- the user 132 is associated with the device 130 and may be a user of the device 130.
- a special-purpose computer that has been specially modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines.
- the network 190 may be any network that enables communication between or among systems, machines, databases, and devices (e.g., between the machine 110 and the device 130). Accordingly, the network 190 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 190 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof.
- transmission medium refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software.
- FIG. 2 is a block diagram illustrating components of the AI machine 110, according to some example embodiments.
- the AI machine 110 is shown as including a machine learning model 210 and a device interface 250, configured to communicate with each other (e.g., via a bus, shared memory, or a switch).
- the machine learning model 210 includes a group 212 of nodes, and the group 212 includes multiple nodes, such as a node 221 (e.g., a first node) and a node 222 (e.g., a second node), which may be a peer of the node 221, a parent of the node 221, a child of the node 221, or a node of some other relationship with the node 221 within the group 212 of nodes.
- the device interface 250 is configured to interact with one or more devices (e.g., device 130, device 150, or both).
- app 200 may be configured to cause the device 130 to send information (e.g., a signal, such as a feedback signal) to the machine learning model 210 (e.g., to the node 221 within the group 212 of nodes), by causing the device interface 250 to send a suitably configured command, request, or other instruction to the device 130 for causing the device 130 to send that information to the machine learning model 210.
- Provision of information (e.g., a signal) to a node (e.g., node 221) may be performed in a memory via software (e.g., one or more software instructions) executing on one or more processors (e.g., processors 299).
- the machine learning model 210 and the device interface 250 may form all or part of an app 200 (e.g., a mobile app, a server app, or other computer program) that is stored (e.g., installed) on the AI machine 110 (e.g., responsive to or otherwise as a result of data being received via the network 190, such as from the database 115 or from the device 130).
- one or more processors 299 (e.g., hardware processors, digital processors, or any suitable combination thereof) may be included (e.g., temporarily or permanently) in the app 200, the machine learning model 210, the device interface 250, or any suitable combination thereof.
- Each component (e.g., module) described herein is an example of a means for performing the operations described herein for that component.
- any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components.
- components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices).
- FIGS. 3-8 are flowcharts illustrating operations of the AI machine 110 in performing a method 300 of modifying self-efficacy in an AI node, such as the node 221 (e.g., a first node in the group 212 of nodes), according to some example embodiments.
- Operations in the method 300 may be performed by the AI machine 110, using components (e.g., modules) described above with respect to FIG. 2, using one or more processors (e.g., microprocessors or other hardware processors), or using any suitable combination thereof.
- the method 300 includes operations 310, 320, 330, and 340.
- In operation 310, the app 200 accesses the machine learning model 210 that includes the group 212 of nodes, which is configured to perform a function of the group 212 of nodes.
- Within the machine learning model 210, the group 212 of nodes may be configured to update the function of the group 212 of nodes based on self-efficacies (e.g., monitored by the machine learning model 210, the app 200, or any suitable combination thereof) of the nodes (e.g., node 221, node 222, or both) in the group 212 of nodes.
- the machine learning model 210 may be configured to monitor the self-efficacies of nodes (e.g., nodes 221 and 222) in the group 212 of nodes and to delete one or more nodes from the group 212 of nodes based on (e.g., in response to) their self-efficacies falling below a threshold self-efficacy.
- a first node (e.g., node 221) among the group 212 of nodes is configured to determine its self-efficacy by determining a degree (e.g., extent) to which outputs of the first node affect inputs to the first node (e.g., as direct inputs for the first node to perform the function of the group 212 of nodes, as observational inputs of outputs from other nodes in performing the function of the group 212, or any suitable combination thereof).
- In operation 320, the app 200 provides, to the first node (e.g., node 221), a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
- the providing of the signal to the first node (e.g., node 221) occurs at an elapsed time since the self-efficacy of the first node (e.g., node 221) was last monitored (e.g., by the machine learning model 210, by the app 200, or any suitable combination thereof). Further details of various example embodiments are discussed below (e.g., with respect to FIGS. 3-8).
- the provided signal causes the first node (e.g., node 221) to determine that its self-efficacy is increased. Accordingly, in operation 330, the first node (e.g., node 221) configures (e.g., reconfigures) itself to differently (e.g., better) perform the function of the group 212 of nodes based on the increased self-efficacy of the first node (e.g., node 221).
- the first node configures itself to differently (e.g., better) perform the function of the group 212 of nodes based on the elapsed time since the self-efficacy of the first node (e.g., node 221) was last monitored (e.g., by the machine learning model 210, by the app 200, or any suitable combination thereof).
- In operation 340, the machine learning model 210 forbears (e.g., refrains) from deleting the first node (e.g., node 221) based on (e.g., in response to) the increased self-efficacy of the first node (e.g., node 221). For example, influenced by the increased self-efficacy of the first node (e.g., node 221), the machine learning model 210 may automatically decide (e.g., choose) to refrain from deleting the first node (e.g., at a time or otherwise during a condition in which the machine learning model 210 otherwise would have decided to delete the first node due to insufficient self-efficacy).
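To make the flow of operations 310-340 easier to follow, here is a minimal, hypothetical Python sketch. It is not the application's implementation: the `Node` class, the `boost` value, the 0.25 threshold, and the way the node updates its self-efficacy are all invented for illustration.

```python
class Node:
    """Toy stand-in for a self-evolving node (e.g., node 221)."""

    def __init__(self, name, self_efficacy=0.0):
        self.name = name
        self.self_efficacy = self_efficacy

    def receive_signal(self, boost):
        # Operation 330: the node treats the signal as evidence that its
        # outputs affect its inputs more strongly, raises its self-efficacy,
        # and (in a real model) would reconfigure how it performs the
        # function of its group.
        self.self_efficacy += boost


def run_method_300(group, first_node, threshold=0.25, boost=0.5):
    # Operation 310: access the model's group of nodes (here, a plain list).
    assert first_node in group

    # Operation 320: provide the first node a signal indicating that the
    # degree to which its outputs affect its inputs is increased.
    first_node.receive_signal(boost)

    # Operation 340: the model forbears from deleting the first node, whose
    # boosted self-efficacy now clears the threshold.
    return [n for n in group if n.self_efficacy >= threshold]


group_212 = [Node("node_221", 0.10), Node("node_222", 0.80)]
survivors = run_method_300(group_212, first_node=group_212[0])
print([n.name for n in survivors])  # both nodes survive in this toy run
```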
- the method 300 may include one or more of operations 422 and 424.
- One or more of operations 422 and 424 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
- In operation 422, the app 200 causes (e.g., commands, instructs, or triggers) the machine learning model 210 to replicate the first node (e.g., node 221) within the group 212 of nodes by creating a child node (e.g., similar to node 221) based on the first node (e.g., node 221).
- the app 200 may cause the machine learning model 210 to initiate (e.g., launch or execute) a node replication process (e.g., an algorithm that divides a parent node into two or more child nodes, or spawns one or more child nodes from a parent node).
- In operation 424, the app 200 configures the child node created in operation 422 to provide feedback to the first node (e.g., node 221).
- the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
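The replication variant of operations 422 and 424 can be pictured with the hypothetical sketch below; the `ParentNode` and `EchoChild` classes and the echo-style feedback are assumptions, since the application leaves the replication algorithm open.

```python
class ParentNode:
    """Hypothetical first node (e.g., node 221) that logs observational inputs."""

    def __init__(self):
        self.observational_inputs = []


class EchoChild:
    """Hypothetical child node created from the parent (operation 422)."""

    def __init__(self, parent):
        self.parent = parent

    def observe(self, parent_output):
        # Operation 424: the child provides feedback by echoing the parent's
        # output back as an observational input, which the parent reads as
        # its outputs controlling its later inputs (i.e., high self-efficacy).
        self.parent.observational_inputs.append(parent_output)


def replicate_with_feedback(group, parent):
    child = EchoChild(parent)  # operation 422: create the child node
    group.append(child)        # keep the child within the group of nodes
    return child               # operation 424: child is wired to the parent


parent = ParentNode()
group_212 = [parent]
child = replicate_with_feedback(group_212, parent)
child.observe(parent_output=1.0)
print(parent.observational_inputs)  # [1.0] -> affirming feedback received
```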
- the method 300 may include one or more of operations 522 and 524.
- One or more of operations 522 and 524 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
- In operation 522, the app 200 accesses a second node (e.g., node 222) within the group 212 of nodes.
- the second node may be a peer node of the first node (e.g., node 221), a parent node of the first node (e.g., node 221), a child node of the first node (e.g., node 221), or a node of some other relationship with the first node (e.g., node 221) within the group 212 of nodes.
- In operation 524, the app 200 configures the second node (e.g., node 222) accessed in operation 522 to provide feedback to the first node (e.g., node 221).
- the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
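Operations 522 and 524 differ from the replication variant only in that the feedback source already exists in the group. A minimal sketch, with invented attribute names, might look like this:

```python
from types import SimpleNamespace

def configure_peer_feedback(group, first_node, second_node):
    # Operation 522: access a second node (e.g., node 222) within the group.
    assert second_node in group and second_node is not first_node
    # Operation 524: configure the second node to provide feedback to the
    # first node (e.g., node 221); here that simply means recording the
    # target so the second node knows where to send affirming signals.
    second_node.feedback_targets.append(first_node)


node_221 = SimpleNamespace(name="node_221", feedback_targets=[])
node_222 = SimpleNamespace(name="node_222", feedback_targets=[])
group_212 = [node_221, node_222]
configure_peer_feedback(group_212, node_221, node_222)
print([n.name for n in node_222.feedback_targets])  # ['node_221']
```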
- the method 300 may include one or more of operations 622 and 624.
- One or more of operations 622 and 624 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
- In operation 622, the app 200 accesses a device (e.g., device 130) that is communicatively coupled (e.g., via the network 190) to the machine learning model 210 (e.g., via the app 200, the AI machine 110, or both).
- For example, the app 200 may initiate a network connection with the device (e.g., device 130) or initiate communications with the device (e.g., device 130) via an existing network connection.
- the app 200 instead receives communication (e.g., feedback) initiated by the device (e.g., device 130), and the received communication may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
- In operation 624, the app 200 causes (e.g., commands, instructs, or triggers) the device (e.g., device 130) accessed in operation 622 to provide feedback to the first node (e.g., node 221).
- the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
- the app 200 provides all or part of the communication initiated by the device (e.g., device 130) to the first node (e.g., node 221), such as by providing the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
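For the device-driven variant (operations 622 and 624, or the device-initiated alternative just described), the app essentially acts as a relay between the device 130 and the first node. The message format, field names, and update rule in the sketch below are hypothetical:

```python
import json
from types import SimpleNamespace

def handle_device_feedback(raw_message, nodes_by_name):
    # Hypothetical relay: the app (e.g., app 200) receives a message from a
    # device (e.g., device 130) and forwards the contained signal to the
    # named node, which treats it as an increased degree of influence of its
    # outputs over its later inputs.
    message = json.loads(raw_message)
    node = nodes_by_name[message["target_node"]]
    node.self_efficacy = max(node.self_efficacy, message["reported_degree"])


node_221 = SimpleNamespace(self_efficacy=0.1)
payload = json.dumps({"target_node": "node_221", "reported_degree": 0.9})
handle_device_feedback(payload, {"node_221": node_221})
print(node_221.self_efficacy)  # 0.9
```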
- the method 300 may include operation 722, which may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
- a second node (e.g., node 222) among the group 212 of nodes is already configured (e.g., by the machine learning model 210) to provide feedback to the first node (e.g., node 221), and the first node (e.g., node 221) is already configured (e.g., by the machine learning model 210) to determine its self-efficacy based on the feedback provided by the second node (e.g., node 222).
- In operation 722, the app 200 provides further (e.g., additional) feedback to the first node (e.g., node 221).
- the further feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
- the method 300 may include one or more of operations 822, 832, and 842.
- Operation 822 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
- In operation 822, the app 200 provides an (e.g., intentionally) inaccurate (e.g., incorrect, false, or untrue, yet still effective for modifying self-efficacy) signal that inaccurately indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
- Operation 832 may be performed as part of operation 330, in which the first node (e.g., node 221) configures itself to differently perform the function of the group 212 of nodes.
- the provided inaccurate signal causes the first node (e.g., node 221) to incorrectly determine that its self-efficacy is increased. Accordingly, the first node (e.g., node 221) configures (e.g., reconfigures) itself to differently (e.g., better) perform the function of the group 212 of nodes based on the incorrect self-efficacy of the first node (e.g., node 221).
- Operation 842 may be performed as part of operation 340, in which the machine learning model 210 forbears (e.g., refrains or otherwise prevents itself) from deleting the first node (e.g., node 221) based on the increased self-efficacy of the first node (e.g., node 221). This forbearance may even occur at a time or otherwise during a condition in which the machine learning model 210 otherwise would have decided to delete the first node (e.g., node 221) due to insufficient self-efficacy.
- In operation 842, the machine learning model 210 forbears (e.g., refrains) from deleting the first node (e.g., node 221), in response to the incorrect self-efficacy of the first node (e.g., node 221), as determined in operation 832.
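The effect of an intentionally inaccurate signal (operations 822, 832, and 842) can be illustrated as follows; the threshold, the scores, and the `perceived_self_efficacy` field are invented, and the point is only that an inflated report keeps an otherwise prunable node above the deletion threshold.

```python
THRESHOLD = 0.25  # assumed deletion threshold for this toy example

class ToyNode:
    def __init__(self, name, true_self_efficacy):
        self.name = name
        self.true_self_efficacy = true_self_efficacy
        self.perceived_self_efficacy = true_self_efficacy

    def receive_signal(self, reported_degree):
        # Operation 832: the node takes the reported degree at face value, so
        # an inaccurate (inflated) report raises its perceived self-efficacy.
        self.perceived_self_efficacy = reported_degree


def prune(group):
    # Operation 842: the model forbears from deleting nodes whose perceived
    # self-efficacy clears the threshold, even if their true score does not.
    return [n for n in group if n.perceived_self_efficacy >= THRESHOLD]


node_221 = ToyNode("node_221", true_self_efficacy=0.10)  # would be pruned
node_222 = ToyNode("node_222", true_self_efficacy=0.60)
# Operation 822: provide an intentionally inaccurate signal to node 221.
node_221.receive_signal(reported_degree=0.90)
print([n.name for n in prune([node_221, node_222])])  # both nodes survive
```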
- one or more of the methodologies described herein may facilitate modification of self-efficacy in an AI node (e.g., a self-evolving AI node, such as node 221, within a self-evolving machine learning model, such as machine learning model 210). Moreover, one or more of the methodologies described herein may facilitate control or other influence on the AI node's presence (e.g., lifespan) within a machine learning model (e.g., machine learning model 210).
- one or more of the methodologies described herein may facilitate extension of the AI node's presence in its group of AI nodes (e.g., group 212 of nodes), as well as prolonging of the AI node's contributions to its group of AI nodes, compared to capabilities of pre-existing systems and methods.
- FIG. 9 is a block diagram illustrating components of a machine 900, according to some example embodiments, able to read instructions 924 from a machine-readable medium 922 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part.
- Specifically, FIG. 9 shows the machine 900 in the example form of a computer system (e.g., a computer) within which the instructions 924 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
- the machine 900 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines.
- the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment.
- the machine 900 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 924, sequentially or otherwise, that specify actions to be taken by that machine.
- the machine 900 includes a processor 902 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 904, and a static memory 906, which are configured to communicate with each other via a bus 908.
- the processor 902 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 924 such that the processor 902 is configurable to perform any one or more of the methodologies described herein, in whole or in part.
- a set of one or more microcircuits of the processor 902 may be configurable to execute one or more modules (e.g., software modules) described herein.
- the processor 902 is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part.
- Although beneficial effects described herein may be provided by the machine 900 with at least the processor 902, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, a purely biological system, or any suitable combination thereof), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
- the machine 900 may further include a graphics display 910 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video).
- the machine 900 may also include an alphanumeric input device 912 (e.g., a keyboard or keypad), a pointer input device 914 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 916, an audio generation device 918 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 920.
- the data storage 916 (e.g., a data storage device) includes the machine-readable medium 922 (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions 924 embodying any one or more of the methodologies or functions described herein.
- the instructions 924 may also reside, completely or at least partially, within the main memory 904, within the static memory 906, within the processor 902 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 900. Accordingly, the main memory 904, the static memory 906, and the processor 902 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media).
- the instructions 924 may be transmitted or received over the network 190 via the network interface device 920.
- the network interface device 920 may communicate the instructions 924 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
- the machine 900 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 930 (e.g., sensors or gauges).
- Examples of such input components 930 include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor).
- Input data gathered by any one or more of these input components 930 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
- the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 922 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions.
- the term "machine-readable medium" shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 924 for execution by the machine 900, such that the instructions 924, when executed by one or more processors of the machine 900 (e.g., processor 902), cause the machine 900 to perform any one or more of the methodologies described herein, in whole or in part.
- a "machine-readable medium" refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices.
- the term "machine-readable medium" shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
- the instructions 924 for execution by the machine 900 can be communicated via a carrier medium (e.g., a machine-readable carrier medium).
- Examples of such a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 924).
- Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof.
- a “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner.
- one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
- a hardware module may be implemented mechanically, electronically, hydraulically, biologically, or any suitable combination thereof.
- a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations.
- a hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC.
- a hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
- a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
- the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
- the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times.
- Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
- processors may be temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein.
- the phrase "processor-implemented module" refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
- processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines.
- the one or more processors or hardware modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
Abstract
A machine modifies the self-efficacy of a node by accessing a machine learning model that includes a group of nodes configured to perform a function. The machine learning model is configured to delete nodes from the group based on their self-efficacies falling below a threshold. In the group, a first node determines its self-efficacy by determining a degree to which its outputs affect its inputs. The machine provides, to the first node, a signal that indicates the degree to which the first node's outputs affect the first node's inputs is increased. The provided signal causes the first node to determine that its self-efficacy is increased, and the first node configures itself to differently perform the function, based on the increased self-efficacy of the first node. The machine learning model forbears from deleting the first node, in response to the increased self-efficacy of the first node.
Description
MODIFYING SELF-EFFICACY IN ARTIFICIAL INTELLIGENCE NODES
TECHNICAL FIELD
[0001] The subject matter disclosed herein generally relates to the technical field of special-purpose machines that facilitate artificial intelligence (AI), including software-configured computerized variants of such special-purpose machines and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines that facilitate AI. Specifically, the present disclosure addresses systems and methods to facilitate modifying self-efficacy in one or more nodes of an AI (e.g., a machine learning model).
BACKGROUND
[0002] Within the field of AI, machine learning models implement one or more algorithms or other techniques for performance on computer hardware and are configured to learn from experiences processing data and make decisions (e.g., predictions, inferences, or categorizations) without explicit programming. Machine learning models may utilize one or more statistical methods, mathematical optimization, pattern recognition techniques, or any suitable combination thereof, to identify patterns or other relationships within data. Machine learning models are now widely used across various domains, such as image recognition, speech recognition, natural language processing, classifiers, recommendation generators, and anomaly detectors. In traditional machine learning models, the model is trained using historical information that is labeled with one or more features, and after training, the trained machine learning model is provided with new unlabeled input data to generate inferences about the new data.
BRIEF DESCRIPTION OF THE DRAWINGS
[0003] Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
[0004] FIG. 1 is a network diagram illustrating a network environment suitable for modifying self-efficacy in an AI node, according to some example embodiments.
[0005] FIG. 2 is a block diagram illustrating components of an AI machine suitable for modifying self-efficacy in an AI node, according to some example embodiments.
[0006] FIGS. 3-8 are flowcharts illustrating operations of the AI machine in performing a method of modifying self-efficacy in an AI node, according to some example embodiments.
[0007] FIG. 9 is a block diagram illustrating components of a machine, according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.
DETAILED DESCRIPTION
[0008] Example methods (e.g., algorithms) facilitate modifying self-efficacy in an AI node (e.g., a node in a machine learning model), and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate modifying self-efficacy in an AI node. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
[0009] A certain type of machine learning model (e.g., as an example of an AI engine or other AI model) can be configured to add to itself one or more self-evolving nodes (e.g., within a group of such nodes in a node network, such as within a layer of the node network), delete from itself one or more self-evolving nodes (e.g., within the group), or any suitable combination thereof, in response to changes in input data processed by the machine learning model (e.g., specifically by the group of nodes). Some illustrative examples of such machine learning models configured to resize a group of nodes are provided in International Application No. PCT/US2023/031616, titled "RESIZING NODE GROUPS IN MACHINE LEARNING MODELS," and filed August 31, 2023. According to the systems and methods discussed herein, one possible basis for the machine learning model in deciding to delete or preserve (e.g., forbear from deleting) such a node is that node's self-efficacy.
[0010] As used herein, the "self-efficacy" of a node represents a degree (e.g., extent) to which the node's own outputs (e.g., in performing data processing operations in accordance with a function of the node or its group of nodes) are observed by that same node to control inputs received later by that same node. That is, a node's self-efficacy indicates how much the node's output affects (e.g., predicts congruence or predicts incongruence) subsequent inputs to that node (e.g., from other nodes). The node's self-efficacy may be high if the node's outputs are strongly correlated with later-received inputs to that node (e.g., the node's outputs always or very often reappear later as inputs to that node). The node's self-efficacy may also be high if the opposite of the node's outputs are strongly correlated with later-received inputs to that node (e.g., the opposites of the node's outputs always or very often reappear later as inputs to that node). In contrast, the node's self-efficacy may be low where the node's outputs are weakly correlated, or not correlated at all, with later-received inputs to that node (e.g., the node's outputs appear to have little to no effect on inputs received later by the node).
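The correlation-based reading of self-efficacy in paragraph [0010] can be illustrated with a short sketch. The snippet below is not taken from the application; the function name, the use of Pearson correlation, and the absolute-value treatment of negated outputs are assumptions chosen to show one plausible way a node could score how strongly its past outputs (or their opposites) predict its later inputs.

```python
import numpy as np

def estimate_self_efficacy(outputs, later_inputs):
    """Score how strongly a node's outputs predict its later inputs.

    A strong positive correlation (outputs reappear as inputs) and a strong
    negative correlation (opposites of outputs reappear) both count as high
    self-efficacy, so the absolute value of the correlation is used.
    """
    outputs = np.asarray(outputs, dtype=float)
    later_inputs = np.asarray(later_inputs, dtype=float)
    if outputs.std() == 0 or later_inputs.std() == 0:
        return 0.0  # no variation, so no measurable influence
    r = np.corrcoef(outputs, later_inputs)[0, 1]
    return abs(float(r))


outs = [1.0, -1.0, 1.0, 1.0, -1.0, -1.0]
inverted = [-o for o in outs]                  # opposites reappear later
print(estimate_self_efficacy(outs, inverted))  # 1.0 -> high self-efficacy

unrelated = [0.3, 0.3, -0.1, 0.2, 0.1, -0.2]   # little to no coupling
print(estimate_self_efficacy(outs, unrelated)) # much lower than 1.0
```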
[0011] Regarding types of inputs to a node, for example, a node may be configured to receive direct input (e.g., specifically to be processed by the node as part of performing the function of the group of nodes) from one or more sources (e.g., one or more nodes inside or outside the group of nodes), and additionally, that same node may be configured to receive observational input (e.g., informational for the node or otherwise not specifically to be processed as part of performing the function of the group) from one or more other nodes in its group (e.g., peer nodes, child nodes, or any suitable combination thereof) regarding the outputs of those one or more other nodes in the group. In some example implementations, the self-efficacy of a node is determined based on one or more observational inputs only, without influence from any direct inputs. In some other example implementations, the self-efficacy of a node is determined based on one or more direct inputs only, without influence from any observational inputs. In some further example implementations, the self-efficacy of a node is determined based on a combination of one or more direct inputs and one or more observational inputs.
[0012] Thus, the self-efficacy of a node may be a value that represents a likelihood (e.g., absolute or relative) that the node's output will be affirmed (e.g., matched or equaled, exactly or approximately) or negated (e.g., inverted or opposed, exactly or approximately) by one or more other nodes in the same group of nodes. The degree to which the node's output is affirmed or negated may range from a high degree (e.g., corresponding to high matching, high similarity, or otherwise high congruence with other nodes in the group, or to high opposition, high dissimilarity, or otherwise high incongruence with other nodes in the group) to a low degree (e.g., corresponding to little matching, little similarity, or otherwise little congruence with other nodes in the group, as well as little opposition, little dissimilarity, or otherwise little incongruence with other nodes in the group).
[0013] Accordingly, a machine learning model may be configured to delete (e.g., prune) any node whose self-efficacy falls below a threshold self-efficacy (e.g., a predetermined threshold self-efficacy value), which may indicate that the node’s output insufficiently affects or predicts its later inputs, which may be or include outputs of other nodes in the group (e.g., performing the same function). In various example implementations, the threshold self-efficacy may be expressed as a proportion relative to other nodes in the group, such as a certain quartile (e.g., deleting nodes with the lowest 25% self-efficacy) or a certain decile (e.g., deleting nodes with the lowest 10% or 20% self-efficacy). In other example implementations, the threshold self-efficacy is expressed as an absolute numerical value (e.g., deleting nodes with self-efficacies at or below a certain floating point number, such as 0.05 or 0.125).
[0014] Within such a machine learning model, a node may be manipulated (e.g., externally influenced) to modify (e.g., adjust, update, or otherwise alter) its self-efficacy, accurately or even inaccurately, thus causing (e.g., triggering) the node to configure (e.g., reconfigure) itself to differently (e.g., better) perform the function of its group of nodes. In particular, causing a node to behave as if its self-efficacy is increased, whether such increase is actually true or not, may cause the node to avoid being deleted by the machine learning model for having insufficient self-efficacy (e.g., self-determined by the node itself). Hence, such modification (e.g., adjustment, altering, or manipulation) of a node’s self-efficacy may have the effect of prolonging the node’s presence and contributions to performance of the function within its group of nodes, which may include providing more time and opportunity for the node to self-evolve towards stronger alignment with, or stronger opposition to, its peer nodes in performing that function.
[0015] Accordingly, a machine (e.g., an Al machine) may modify the self-efficacy of a node by first accessing a machine learning model that includes a group of nodes configured to perform a function of the group of nodes. The machine learning model is configured to monitor the self-efficacies of nodes in the group and delete one or more nodes from the group based on (e.g., in response to) any of their self-efficacies falling below a threshold self-efficacy. In the group, a first node is configured to determine its self-efficacy by determining a degree to which one or more of its outputs predict one or more of its inputs (e.g., direct, observational, or both). The machine then provides, to the first node, a signal that indicates that the degree to which the first node’s outputs affect the first node’s inputs is increased. As a result, the provided signal causes the first node to determine that its self-efficacy is increased, and the first node accordingly configures itself to differently (e.g., better) perform the function of the group, based on the increased self-efficacy of the first node.
[0016] As a further consequence, the machine learning model forbears from deleting the first node (e.g., decides not to delete the first node, which
otherwise would have been deleted due to insufficient self-efficacy), in response to the increased self-efficacy of the first node. The forbearance of the machine learning model from deleting the node may endure until the next time that node’s self-efficacy falls below the threshold self-efficacy. As a result, the first node may have its presence in the group extended by the provided signal (e.g., prolonging its “life” within the group of nodes), thus obtaining more time for the first node to self-evolve toward better performance of the function of the group of nodes, compared to what otherwise would have occurred without providing the signal to modify the first node’s self-efficacy.
[0017] FIG. 1 is a network diagram illustrating a network environment 100 suitable for modifying (e.g., adjusting) self-efficacy in an Al node, according to some example embodiments. The network environment 100 includes an Al machine 110, a database 115, and devices 130 and 150, all communicatively coupled to each other via a network 190. The Al machine 110, with or without the database 115, may form all or part of a cloud 118 (e.g., a geographically distributed set of multiple machines configured to function as a single server), which may form all or part of a network-based system 105 (e.g., a cloud-based server system configured to provide one or more network-based services to the devices 130 and 150). The Al machine 110 and the devices 130 and 150 may each be implemented in a special-purpose (e.g., specialized) computer system, in whole or in part, as described below with respect to FIG. 9.
[0018] Also shown in FIG. 1 are users 132 and 152. One or both of the users 132 and 152 may be a human user (e.g., a human being), a machine user (e.g., a computer configured by a software program to interact with the device 130 or 150), or any suitable combination thereof (e.g., a human assisted by a machine or a machine supervised by a human). The user 132 is associated with the device 130 and may be a user of the device 130. For example, the device 130 may be a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 132. Likewise, the user 152 is associated with the device 150 and may be a user of the device 150. As an example, the device 150 may be
a desktop computer, a vehicle computer, a home media system (e.g., a home theater system or other home entertainment system), a tablet computer, a navigational device, a portable media device, a smart phone, or a wearable device (e.g., a smart watch, smart glasses, smart clothing, or smart jewelry) belonging to the user 152.
[0019] Any of the systems or machines (e.g., databases and devices) shown in FIG. 1 may be, include, or otherwise be implemented in a special-purpose (e.g., specialized or otherwise non-conventional and non-generic) computer that has been modified to perform one or more of the functions described herein for that system or machine (e.g., configured or programmed by special-purpose software, such as one or more software modules of a special-purpose application, operating system, firmware, middleware, or other software program). For example, a special-purpose computer system able to implement any one or more of the methodologies described herein is discussed below with respect to FIG. 9, and such a special-purpose computer may accordingly be a means for performing any one or more of the methodologies discussed herein. Within the technical field of such special-purpose computers, a special-purpose computer that has been specially modified (e.g., configured by special-purpose software) by the structures discussed herein to perform the functions discussed herein is technically improved compared to other special-purpose computers that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein. Accordingly, a special-purpose machine configured according to the systems and methods discussed herein provides an improvement to the technology of similar special-purpose machines.
[0020] As used herein, a “database” is a data storage resource and may store data structured in any of various ways, for example, as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, a document database, a graph database, key-value pairs, or any suitable combination thereof. Moreover, any two or more of the systems or machines illustrated in FIG. 1 may be combined into a single system or machine, and the functions described herein for any single system or machine may be subdivided among multiple systems or machines.
[0021] The network 190 may be any network that enables communication between or among systems, machines, databases, and devices (e.g., between the machine 110 and the device 130). Accordingly, the network 190 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 190 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof. Accordingly, the network 190 may include one or more portions that incorporate a local area network (LAN), a wide area network (WAN), the Internet, a mobile telephone network (e.g., a cellular network), a wired telephone network (e.g., a plain old telephone service (POTS) network), a wireless data network (e.g., a WiFi network or WiMax network), or any suitable combination thereof. Any one or more portions of the network 190 may communicate information via a transmission medium. As used herein, “transmission medium” refers to any intangible (e.g., transitory) medium that is capable of communicating (e.g., transmitting) instructions for execution by a machine (e.g., by one or more processors of such a machine), and includes digital or analog communication signals or other intangible media to facilitate communication of such software.
[0022] FIG. 2 is a block diagram illustrating components of the Al machine 110, according to some example embodiments. The Al machine 110 is shown as including a machine learning model 210 and a device interface 250, configured to communicate with each other (e.g., via a bus, shared memory, or a switch). The machine learning model 210 includes a group 212 of nodes, and the group 212 includes multiple nodes, such as a node 221 (e.g., a first node) and a node 222 (e.g., a second node), which may be a peer of the node 221, a parent of the node 221, a child of the node 221, or a node of some other relationship with the node 221 within the group 212 of nodes.
[0023] The device interface 250 is configured to interact with one or more devices (e.g., device 130, device 150, or both). In particular, the app 200 may be configured to cause the device 130 to send information (e.g., a signal, such as a feedback signal) to the machine learning model 210 (e.g., to the node 221 within the group 212 of nodes), by causing the device interface 250 to send a suitably configured command, request, or other instruction to the device 130 for causing
the device 130 to send that information to the machine learning model 210. Provision of information (e.g., a signal) to a node (e.g., node 221) may be performed in a memory via software (e.g., one or more software instructions) executing on one or more processors (e.g., processors 299).
[0024] As shown in FIG. 2, the machine learning model 210 and the device interface 250 may form all or part of an app 200 (e.g., a mobile app, a server app, or other computer program) that is stored (e.g., installed) on the Al machine 110 (e.g., responsive to or otherwise as a result of data being received via the network 190, such as from the database 115 or from the device 130). Furthermore, one or more processors 299 (e.g., hardware processors, digital processors, or any suitable combination thereof) may be included (e.g., temporarily or permanently) in the app 200, the machine learning model 210, the device interface 250, or any suitable combination thereof.
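For illustration only, the arrangement shown in FIG. 2 may be loosely mirrored by hypothetical data structures such as the following; the class names and fields are illustrative stand-ins and do not limit how the app 200, the machine learning model 210, the group 212, the nodes 221 and 222, or the device interface 250 are actually implemented.

```python
# Hypothetical, simplified mirror of the components in FIG. 2: a machine
# learning model 210 containing a group 212 of nodes (e.g., nodes 221 and
# 222), plus a device interface 250, all bundled into an app 200.
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    self_efficacy: float = 0.0

@dataclass
class Group:
    nodes: dict = field(default_factory=dict)  # node_id -> Node

@dataclass
class MachineLearningModel:
    group: Group = field(default_factory=Group)

@dataclass
class DeviceInterface:
    connected_devices: list = field(default_factory=list)

@dataclass
class App:
    model: MachineLearningModel = field(default_factory=MachineLearningModel)
    device_interface: DeviceInterface = field(default_factory=DeviceInterface)

app_200 = App()
app_200.model.group.nodes["node_221"] = Node("node_221")
app_200.model.group.nodes["node_222"] = Node("node_222")
app_200.device_interface.connected_devices.append("device_130")
```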
[0025] Any one or more of the components (e.g., modules) described herein may be implemented using hardware alone (e.g., one or more of the processors 299) or a combination of hardware and software. For example, any component described herein may physically include an arrangement of one or more of the processors 299 (e.g., a subset of or among the processors 299) configured to perform the operations described herein for that component. As another example, any component described herein may include software, hardware, or both, that configure an arrangement of one or more of the processors 299 to perform the operations described herein for that component. Accordingly, different components described herein may include and configure different arrangements of the processors 299 at different points in time or a single arrangement of the processors 299 at different points in time. Each component (e.g., module) described herein is an example of a means for performing the operations described herein for that component. Moreover, any two or more components described herein may be combined into a single component, and the functions described herein for a single component may be subdivided among multiple components. Furthermore, according to various example embodiments, components described herein as being implemented within a single system or machine (e.g., a single device) may be distributed across multiple systems or machines (e.g., multiple devices).
[0026] FIGS. 3-8 are flowcharts illustrating operations of the Al machine 110 in performing a method 300 of modifying self-efficacy in an Al node, such as the node 221 (e.g., a first node in the group 212 of nodes), according to some example embodiments. Operations in the method 300 may be performed by the Al machine 110, using components (e.g., modules) described above with respect to FIG. 2, using one or more processors (e.g., microprocessors or other hardware processors), or using any suitable combination thereof. As shown in FIG. 3, the method 300 includes operations 310, 320, 330, and 340.
[0027] In operation 310, the app 200 accesses the machine learning model 210 that includes the group 212 of nodes, which is configured to perform a function of the group 212 of nodes. According to some example embodiments, within the machine learning model 210, the group 212 of nodes may be configured to update the function of the group 212 of nodes based on self-efficacies (e.g., monitored by the machine learning model 210, the app 200, or any suitable combination thereof) of the nodes (e.g., node 221, node 222, or both) in the group 212 of nodes. The machine learning model 210 may be configured to monitor the self-efficacies of nodes (e.g., nodes 221 and 222) in the group 212 of nodes and to delete one or more nodes from the group 212 of nodes based on (e.g., in response to) their self-efficacies falling below a threshold self-efficacy. A first node (e.g., node 221) among the group 212 of nodes is configured to determine its self-efficacy by determining a degree (e.g., extent) to which outputs of the first node affect inputs to the first node (e.g., as direct inputs for the first node to perform the function of the group 212 of nodes, as observational inputs of outputs from other nodes in performing the function of the group 212, or any suitable combination thereof).
[0028] In operation 320, the app 200 provides, to the first node (e.g., node 221), a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased. According to some example embodiments, the providing of the signal to the first node (e.g., node 221) occurs at an elapsed time since the self-efficacy of the first node (e.g., node 221) was last monitored (e.g., by the machine learning model 210, by the app 200, or any suitable combination thereof). Further details of various example embodiments are discussed below (e.g., with respect to FIGS. 3-8).
[0029] In operation 330, the provided signal causes the first node (e.g., node 221) to determine that its self-efficacy is increased. Accordingly, the first node (e.g., node 221) configures (e.g., reconfigures) itself to differently (e.g., better) perform the function of the group 212 of nodes based on the increased self-efficacy of the first node (e.g., node 221). In some example embodiments, the first node (e.g., node 221) configures itself to differently (e.g., better) perform the function of the group 212 of nodes based on the elapsed time since the self-efficacy of the first node (e.g., node 221) was last monitored (by the machine learning model 210, by the app 200, or any suitable combination thereof).
[0030] In operation 340, the machine learning model 210 forbears (e.g., refrains) from deleting the first node (e.g., node 221) based on (e.g., in response to) the increased self-efficacy of the first node (e.g., node 221). For example, influenced by the increased self-efficacy of the first node (e.g., node 221), the machine learning model 210 may automatically decide (e.g., choose) to refrain from deleting the first node (e.g., at a time or otherwise during a condition in which the machine learning model 210 otherwise would have decided to delete the first node due to insufficient self-efficacy).
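A non-limiting sketch of operations 310 through 340 follows. The class names, the boost amount, the threshold value, and the weight-scaling reconfiguration rule are hypothetical placeholders for whatever mechanisms a given embodiment uses.

```python
# Hypothetical end-to-end sketch of method 300 (operations 310-340):
# access the model, provide a first node a signal indicating increased
# self-efficacy, let that node reconfigure itself, and have the model
# forbear from deleting it during its pruning pass.
THRESHOLD = 0.25  # assumed threshold self-efficacy for pruning

class Node:
    def __init__(self, node_id, self_efficacy):
        self.node_id = node_id
        self.self_efficacy = self_efficacy
        self.weight = 1.0

    def receive_signal(self, indicated_increase):
        # Operation 330: the signal causes the node to determine that its
        # self-efficacy is increased, and to reconfigure itself accordingly.
        self.self_efficacy += indicated_increase
        self.weight *= 1.0 + indicated_increase  # hypothetical reconfiguration

class Model:
    def __init__(self, nodes):
        self.group = {n.node_id: n for n in nodes}

    def prune(self):
        # Operation 340: the model forbears from deleting any node whose
        # (possibly boosted) self-efficacy no longer falls below the threshold.
        self.group = {nid: n for nid, n in self.group.items()
                      if n.self_efficacy >= THRESHOLD}

# Operation 310: access the model and its group of nodes.
model = Model([Node("node_221", 0.10), Node("node_222", 0.60)])

# Operation 320: provide the first node a signal indicating that the degree
# to which its outputs affect its inputs is increased.
model.group["node_221"].receive_signal(indicated_increase=0.30)

model.prune()
print(sorted(model.group))  # node_221 survives a pruning pass it would have failed
```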
[0031] As shown in FIG. 4, in addition to any one or more of the operations previously described, the method 300 may include one or more of operations 422 and 424. One or more of operations 422 and 424 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
[0032] In operation 422, the app 200 causes (e.g., commands, instructs, or triggers) the machine learning model 210 to replicate the first node (e.g., node 221) within the group 212 of nodes by creating a child node (e.g., similar to node 221) based on the first node (e.g., node 221). For example, the app 200 may cause the machine learning model 210 to initiate (e.g., launch or execute) a node replication process (e.g., an algorithm that divides a parent node into two or more child nodes, or spawns one or more child nodes from a parent node).
[0033] In operation 424, the app 200 configures the child node created in operation 422 to provide feedback to the first node (e.g., node 221). In
particular, the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
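By way of illustration only, operations 422 and 424 might be sketched as follows, with replicate_with_feedback and the fixed feedback amount serving as hypothetical stand-ins for the node replication process and the signal actually used.

```python
# Hypothetical sketch of operations 422 and 424: the model replicates the
# first node by spawning a child node, and the child is configured to feed
# back a signal indicating that the parent's self-efficacy is increased.
class Node:
    def __init__(self, node_id, self_efficacy=0.1):
        self.node_id = node_id
        self.self_efficacy = self_efficacy
        self.feedback_targets = []

    def emit_feedback(self, indicated_increase=0.2):
        for target in self.feedback_targets:
            target.self_efficacy += indicated_increase

def replicate_with_feedback(group, parent_id):
    parent = group[parent_id]
    child = Node(parent_id + "_child", parent.self_efficacy)  # operation 422
    child.feedback_targets.append(parent)                     # operation 424
    group[child.node_id] = child
    return child

group = {"node_221": Node("node_221")}
child = replicate_with_feedback(group, "node_221")
child.emit_feedback()
print(group["node_221"].self_efficacy)  # increased by the child's feedback
```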
[0034] As shown in FIG. 5, in addition to any one or more of the operations previously described, the method 300 may include one or more of operations 522 and 524. One or more of operations 522 and 524 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
[0035] In operation 522, the app 200 accesses a second node (e.g., node 222) within the group 212 of nodes. The second node (e.g., node 222) may be a peer node of the first node (e.g., node 221), a parent node of the first node (e.g., node 221), a child node of the first node (e.g., node 221), or a node of some other relationship with the first node (e.g., node 221) within the group 212 of nodes.
[0036] In operation 524, the app 200 configures the second node (e.g., node 222) accessed in operation 522 to provide feedback to the first node (e.g., node 221). In particular, the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
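Similarly, and again purely as a hypothetical sketch, operations 522 and 524 might resemble the following, in which an existing second node (rather than a newly created child) is configured to provide the feedback.

```python
# Hypothetical sketch of operations 522 and 524: an existing second node in
# the same group (a peer, parent, or child of the first node) is configured
# to feed back the self-efficacy-increasing signal to the first node.
class Node:
    def __init__(self, node_id, self_efficacy=0.1):
        self.node_id = node_id
        self.self_efficacy = self_efficacy
        self.feedback_targets = []

group = {"node_221": Node("node_221"), "node_222": Node("node_222", 0.7)}

first, second = group["node_221"], group["node_222"]  # operation 522
second.feedback_targets.append(first)                 # operation 524

# The configured feedback indicates an increased degree of output-to-input effect.
for target in second.feedback_targets:
    target.self_efficacy += 0.2
print(first.self_efficacy)  # increased by the second node's feedback
```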
[0037] As shown in FIG. 6, in addition to any one or more of the operations previously described, the method 300 may include one or more of operations 622 and 624. One or more of operations 622 and 624 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
[0038] In operation 622, the app 200 accesses a device (e.g., device 130) that is communicatively coupled (e.g., via the network 190) to the machine learning model 210 (e.g., via the app 200, the Al machine 110, or both). For example, the app 200 may initiate a network connection with the device (e.g.,
device 130) or initiate communications with the device (e.g., device 130) via an existing network connection. In some example embodiments, the app 200 instead receives communication (e.g., feedback) initiated by the device (e.g., device 130), and the received communication may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
[0039] In operation 624, the app 200 causes (e.g., commands, instructs, or triggers) the device (e.g., device 130) accessed in operation 622 to provide feedback to the first node (e.g., node 221). In particular, the feedback may include the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased. In some example embodiments, the app 200 provides all or part of the communication initiated by the device (e.g., device 130) to the first node (e.g., node 221), such as by providing the signal (e.g., as described above with respect to operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
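For illustration only, operations 622 and 624 might be sketched as follows; the Device and App classes and the relay logic are hypothetical and merely indicate one possible flow of feedback from a communicatively coupled device to the first node.

```python
# Hypothetical sketch of operations 622 and 624: the app accesses a device
# that is communicatively coupled to the machine learning model and causes
# that device to provide feedback, which the app relays to the first node.
class Device:
    def __init__(self, device_id):
        self.device_id = device_id

    def produce_feedback(self):
        # Operation 624: the device provides feedback that includes the
        # signal indicating an increased degree of output-to-input effect.
        return {"target": "node_221", "indicated_increase": 0.2}

class App:
    def __init__(self, node_self_efficacies):
        self.nodes = node_self_efficacies  # node id -> self-efficacy value

    def relay(self, device):
        feedback = device.produce_feedback()  # operations 622 and 624
        self.nodes[feedback["target"]] += feedback["indicated_increase"]

app_200 = App({"node_221": 0.1})
app_200.relay(Device("device_130"))
print(app_200.nodes["node_221"])  # node 221's self-efficacy, now increased
```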
[0040] As shown in FIG. 7, in addition to any one or more of the operations previously described, the method 300 may include operation 722, which may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
[0041] In example embodiments that include operation 722, a second node (e.g., node 222) among the group 212 of nodes is already configured (e.g., by the machine learning model 210) to provide feedback to the first node (e.g., node 221), and the first node (e.g., node 221) is already configured (e.g., by the machine learning model 210) to determine its self-efficacy based on the feedback provided by the second node (e.g., node 222).
[0042] Accordingly, in operation 722, the app 200 provides further (e.g., additional) feedback to the first node (e.g., node 221). In particular, the further feedback may include the signal (e.g., as described above with respect to
operation 320) that indicates that the degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
[0043] As shown in FIG. 8, in addition to any one or more of the operations previously described, the method 300 may include one or more of operations 822, 832, and 842. Operation 822 may be performed as part (e.g., a precursor task, a subroutine, or a portion) of operation 320, in which the app 200 provides the signal (e.g., a feedback signal) to the first node (e.g., node 221).
[0044] In operation 822, the app 200 provides an (e.g., intentionally) inaccurate (e.g., incorrect, false, or untrue, yet still effective for modifying self-efficacy) signal that indicates that an inaccurate degree to which outputs of the first node (e.g., node 221) affect inputs to the first node (e.g., node 221) is increased.
[0045] Operation 832 may be performed as part of operation 330, in which the first node (e.g., node 221) configures itself to differently perform the function of the group 212 of nodes.
[0046] In operation 832, the provided inaccurate signal causes the first node (e.g., node 221) to incorrectly determine that its self-efficacy is increased. Accordingly, the first node (e.g., node 221) configures (e.g., reconfigures) itself to differently (e.g., better) perform the function of the group 212 of nodes based on the incorrect self-efficacy of the first node (e.g., node 221).
[0047] Operation 842 may be performed as part of operation 340, in which the machine learning model 210 forbears (e.g., refrains or otherwise prevents itself) from deleting the first node (e.g., node 221) based on the increased self-efficacy of the first node (e.g., node 221). This forbearance may even occur at a time or otherwise during a condition in which the machine learning model 210 otherwise would have decided to delete the first node (e.g., node 221) due to insufficient self-efficacy.
[0048] In operation 842, the machine learning model 210 forbears (e.g., refrains) from deleting the first node (e.g., node 221), in response to the incorrect self-efficacy of the first node (e.g., node 221), as determined in operation 832.
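A non-limiting sketch of operations 822, 832, and 842 follows; the inflation amount and the threshold are hypothetical example values used only to show how an intentionally inaccurate signal can cause the pruning pass to retain a node it would otherwise have deleted.

```python
# Hypothetical sketch of operations 822, 832, and 842: an intentionally
# inaccurate signal reports a higher degree of output-to-input effect than
# was actually measured, the first node incorrectly raises its self-efficacy,
# and the pruning pass consequently forbears from deleting it.
THRESHOLD = 0.25  # assumed threshold self-efficacy

def provide_inaccurate_signal(measured_degree, inflation=0.5):
    # Operation 822: report an inflated (inaccurate) degree, capped at 1.0.
    return min(1.0, measured_degree + inflation)

self_efficacies = {"node_221": 0.10, "node_222": 0.60}

# Operation 832: the first node adopts the inaccurate, higher value.
self_efficacies["node_221"] = provide_inaccurate_signal(self_efficacies["node_221"])

# Operation 842: pruning now retains node_221, which its true measurement
# (0.10) otherwise would have caused to be deleted.
survivors = {nid for nid, se in self_efficacies.items() if se >= THRESHOLD}
print(sorted(survivors))  # ['node_221', 'node_222']
```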
[0049] According to various example embodiments, one or more of the methodologies described herein may facilitate modification of self-efficacy in an Al node (e.g., a self-evolving Al node, such as node 221, within a self-evolving
machine learning model, such as machine learning model 210). Moreover, one or more of the methodologies described herein may facilitate control or other influence on the Al node’s presence (e.g., lifespan) within a machine learning model (e.g., machine learning model 210). Hence, one or more of the methodologies described herein may facilitate extension of the Al node’s presence in its group of Al nodes (e.g., group 212 of nodes), as well as prolonging of the Al node’s contributions to its group of Al nodes, compared to capabilities of pre-existing systems and methods.
[0050] When these effects are considered in aggregate, one or more of the methodologies described herein may obviate a need for certain efforts or resources that otherwise would be involved in modifying self-efficacy in one or more Al nodes. Efforts expended by a user in modifying self-efficacy in one or more Al nodes may be reduced by use of (e.g., reliance upon) a special-purpose machine that implements one or more of the methodologies described herein. Computing resources used by one or more systems or machines (e.g., within the network environment 100) may similarly be reduced (e.g., compared to systems or machines that lack the structures discussed herein or are otherwise unable to perform the functions discussed herein). Examples of such computing resources include processor cycles, network traffic, computational capacity, main memory usage, graphics rendering capacity, graphics memory usage, data storage capacity, power consumption, and cooling capacity.
[0051] FIG. 9 is a block diagram illustrating components of a machine 900, according to some example embodiments, able to read instructions 924 from a machine-readable medium 922 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part. Specifically, FIG. 9 shows the machine 900 in the example form of a computer system (e.g., a computer) within which the instructions 924 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 900 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
[0052] In alternative embodiments, the machine 900 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine 900 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The machine 900 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 924, sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute the instructions 924 to perform all or part of any one or more of the methodologies discussed herein.
[0053] The machine 900 includes a processor 902 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 904, and a static memory 906, which are configured to communicate with each other via a bus 908. The processor 902 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 924 such that the processor 902 is configurable to perform any one or more of the methodologies described herein, in whole or in part. For example, a set of one or more microcircuits of the processor 902 may be configurable to execute one or more modules (e.g., software modules) described herein. In some example embodiments, the processor 902 is a multi-core CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part. Although the beneficial effects described herein may be provided by the machine 900 with at least the processor 902, these same
beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, a purely biological system, or any suitable combination thereof), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
[0054] The machine 900 may further include a graphics display 910 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video). The machine 900 may also include an alphanumeric input device 912 (e.g., a keyboard or keypad), a pointer input device 914 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 916, an audio generation device 918 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 920.
[0055] The data storage 916 (e.g., a data storage device) includes the machine-readable medium 922 (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions 924 embodying any one or more of the methodologies or functions described herein. The instructions 924 may also reside, completely or at least partially, within the main memory 904, within the static memory 906, within the processor 902 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 900. Accordingly, the main memory 904, the static memory 906, and the processor 902 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media). The instructions 924 may be transmitted or received over the network 190 via the network interface device 920. For example, the network interface device 920 may communicate the instructions 924 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
[0056] In some example embodiments, the machine 900 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 930 (e.g., sensors or gauges). Examples of such input components 930 include an image input
component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor). Input data gathered by any one or more of these input components 930 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
[0057] As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 922 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 924 for execution by the machine 900, such that the instructions 924, when executed by one or more processors of the machine 900 (e.g., processor 902), cause the machine 900 to perform any one or more of the methodologies described herein, in whole or in part. Accordingly, a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
[0058] A “non-transitory” machine-readable medium, as used herein, specifically excludes propagating signals per se. According to various example embodiments, the instructions 924 for execution by the machine 900 can be communicated via a carrier medium (e.g., a machine-readable carrier medium). Examples of such a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 924).
[0059] Certain example embodiments are described herein as including modules. Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
[0060] In some example embodiments, a hardware module may be implemented mechanically, electronically, hydraulically, biologically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. As an example, a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
[0061] Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Furthermore, as used herein, the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times. Software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time.
[0062] Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
[0063] The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily
configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
[0064] Moreover, such one or more processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines. In some example embodiments, the one or more processors or hardware modules (e.g., processor-implemented modules) may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
[0065] Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and their functionality presented as separate components and functions in example configurations may be implemented as a combined
structure or component with combined functions. Similarly, structures and functionality presented as a single component may be implemented as separate components and functions. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
[0066] Some portions of the subject matter discussed herein may be presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a memory (e.g., a computer memory or other machine memory). Such algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
[0067] Unless specifically stated otherwise, discussions herein using words such as “accessing,” “processing,” “detecting,” “computing,” “calculating,” “determining,” “generating,” “presenting,” “displaying,” or the like refer to actions or processes performable by a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms “a” or “an” are herein used, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction “or” refers to a nonexclusive “or,” unless specifically stated otherwise.
[0068] The following enumerated descriptions describe various examples of methods, machine-readable media, and systems (e.g., machines, devices, or other apparatus) discussed herein. Any one or more features of an example, taken in isolation or combination, should be considered as being within the disclosure of this application.
[0069] A first example provides a method comprising: accessing, by one or more processors, a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node (e.g., as direct inputs for the first node to perform the function of the group of nodes, as observational inputs of outputs from other nodes in performing the function of the group, or any suitable combination thereof); and providing, to the first node and by the one or more processors, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
[0070] A second example provides a method according to the first example, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0071] A third example provides a method according to the first example or the second example, wherein: the providing of the signal to the first node includes: accessing a second node within the group of nodes; and configuring the second node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0072] A fourth example provides a method according to any of the first through third examples, wherein: the providing of the signal to the first node includes: accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0073] A fifth example provides a method according to any of the first through fourth examples, wherein: the providing of the signal to the first node includes: receiving feedback from a device, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased; and providing the feedback received from the device to the first node.
[0074] A sixth example provides a method according to any of the first through fifth examples, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and
the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0075] A seventh example provides a method according to any of the first through sixth examples, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
[0076] An eighth example provides a method according to any of the first through seventh examples, wherein: within the machine learning model, the group of nodes is configured to update the function of the group of nodes based on the monitored self-efficacies of the nodes in the group of nodes.
[0077] A ninth example provides a method according to any of the first through eighth examples, wherein: the providing of the signal to the first node provides an inaccurate signal that indicates that an inaccurate degree to which outputs of the first node affect inputs to the first node is increased, the provided inaccurate signal causing the first node to incorrectly determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the incorrect self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the incorrect self-efficacy of the first node.
[0078] A tenth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being
configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node (e.g., as direct inputs for the first node to perform the function of the group of nodes, as observational inputs of outputs from other nodes in performing the function of the group, or any suitable combination thereof); and providing, to the first node, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
[0079] An eleventh example provides a machine-readable medium according to the tenth example, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0080] A twelfth example provides a machine-readable medium according to the tenth example or the eleventh example, wherein: the providing of the signal to the first node includes: accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0081] A thirteenth example provides a machine-readable medium according to any of the tenth through twelfth examples, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0082] A fourteenth example provides a machine-readable medium according to any of the tenth through thirteenth examples, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
[0083] A fifteenth example provides a machine-readable medium according to any of the tenth through fourteenth examples, wherein: the providing of the signal to the first node provides an inaccurate signal that indicates that an inaccurate degree to which outputs of the first node affect inputs to the first node is increased, the provided inaccurate signal causing the first node to incorrectly determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the incorrect self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the incorrect self-efficacy of the first node.
[0084] A sixteenth example provides a system (e.g., a computer system that includes one or more computers, devices, or other machines) comprising: one or more processors; and
a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node (e.g., as direct inputs for the first node to perform the function of the group of nodes, as observational inputs of outputs from other nodes in performing the function of the group, or any suitable combination thereof); and providing, to the first node, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
[0085] A seventeenth example provides a system according to the sixteenth example, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0086] An eighteenth example provides a system according to the sixteenth example or the seventeenth example, wherein: the providing of the signal to the first node includes:
accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0087] A nineteenth example provides a system according to any of the sixteenth through eighteenth examples, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
[0088] A twentieth example provides a system according to any of the sixteenth through nineteenth examples, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
[0089] A twenty-first example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.
Claims
1. A method comprising: accessing, by one or more processors, a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node; and providing, to the first node and by the one or more processors, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
2. The method of claim 1, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
3. The method of claim 1, wherein: the providing of the signal to the first node includes: accessing a second node within the group of nodes; and configuring the second node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
4. The method of claim 1, wherein: the providing of the signal to the first node includes: accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
5. The method of claim 1, wherein: the providing of the signal to the first node includes: receiving feedback from a device, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased; and providing the feedback received from the device to the first node.
6. The method of claim 1, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
7. The method of claim 1, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
8. The method of claim 1, wherein: within the machine learning model, the group of nodes is configured to update the function of the group of nodes based on the monitored self-efficacies of the nodes in the group of nodes.
9. The method of claim 1, wherein: the providing of the signal to the first node provides an inaccurate signal that indicates that an inaccurate degree to which outputs of the first node affect inputs to the first node is increased, the provided inaccurate signal causing the first node to incorrectly determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the incorrect self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the incorrect self-efficacy of the first node.
10. A non-transitory machine-readable storage medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node; and providing, to the first node, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
11. The non-transitory machine-readable storage medium of claim 10, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
12. The non-transitory machine-readable storage medium of claim 10, wherein: the providing of the signal to the first node includes: accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
13. The non-transitory machine-readable storage medium of claim 10, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
14. The non-transitory machine-readable storage medium of claim 10, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
15. The non-transitory machine-readable storage medium of claim 10, wherein: the providing of the signal to the first node provides an inaccurate signal that indicates that an inaccurate degree to which outputs of the first node affect inputs to the first node is increased, the provided inaccurate signal causing the first node to incorrectly determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the incorrect self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the incorrect self-efficacy of the first node.
16. A system comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing a machine learning model that includes a group of nodes configured to perform a function of the group of nodes, the machine learning model being configured to monitor self-efficacies of nodes in the group of nodes and delete one or more nodes from the group of nodes based on their self-efficacies falling below a threshold self-efficacy, a first node among the group of nodes being configured to determine its self-efficacy by determining a degree to which outputs of the first node affect inputs to the first node; and providing, to the first node, a signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased, the provided signal causing the first node to determine that its self-efficacy is increased, the first node configuring itself to differently perform the function of the group of nodes based on the increased self-efficacy of the first node, the machine learning model forbearing from deleting the first node in response to the increased self-efficacy of the first node.
17. The system of claim 16, wherein: the providing of the signal to the first node includes: causing the machine learning model to replicate the first node within the group of nodes by creating a child node based on the first node; and configuring the child node to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
18. The system of claim 16, wherein: the providing of the signal to the first node includes: accessing a device communicatively coupled to the machine learning model; and causing the device to provide feedback to the first node, the feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
19. The system of claim 16, wherein: a second node among the group of nodes is configured to provide feedback to the first node; the first node is configured to determine its self-efficacy based on the feedback provided by the second node; and the providing of the signal to the first node provides further feedback to the first node, the further feedback including the signal that indicates that the degree to which outputs of the first node affect inputs to the first node is increased.
20. The system of claim 16, wherein: the providing of the signal to the first node occurs at an elapsed time since the machine learning model monitored the self-efficacy of the first node; and the first node configures itself to differently perform the function of the group of nodes based on the elapsed time since the machine learning model monitored the self-efficacy of the first node.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/US2023/034436 (published as WO2025075613A1) | 2023-10-04 | 2023-10-04 | Modifying self-efficacy in artificial intelligence nodes |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025075613A1 (en) | 2025-04-10 |
Family
ID=95283759
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2023/034436 (published as WO2025075613A1, pending) | Modifying self-efficacy in artificial intelligence nodes | 2023-10-04 | 2023-10-04 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025075613A1 (en) |
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20200349446A1 (en) * | 2018-01-30 | 2020-11-05 | D5Ai Llc | Training a nodal network so that a first node has a high magnitude correlation with the partial derivative of the objective with respect to a second node |
| WO2023020355A1 (en) * | 2021-08-20 | 2023-02-23 | 华为云计算技术有限公司 | Distributed training method for ai model and related device |
| US20230185652A1 (en) * | 2021-12-11 | 2023-06-15 | Adapdix Corporation | Real-time self-adaptive tuning and control of a device using machine learning |
| US20230237366A1 (en) * | 2022-01-25 | 2023-07-27 | Accenture Global Solutions Limited | Scalable and adaptive self-healing based architecture for automated observability of machine learning models |
Non-Patent Citations (1)
| Title |
|---|
| Yunmin Kim and Tae-Jin Lee, "Learning nodes: machine learning-based energy and data management strategy," EURASIP Journal on Wireless Communications and Networking, vol. 2021, no. 1, pp. 1-16, 15 September 2021, London, UK, XP021296249, DOI: 10.1186/s13638-021-02047-6 * |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 23954928; Country of ref document: EP; Kind code of ref document: A1 |