EP4602523A1 - System, method, and computer program product for generating a machine learning model based on anomaly nodes of a graph
- Publication number
- EP4602523A1 (application EP23877991.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- nodes
- node
- anomaly
- labeled
- embedding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7753—Incorporation of unlabelled data, e.g. multiple instance learning [MIL]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/243—Classification techniques relating to the number of classes
- G06F18/2433—Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/042—Knowledge-based neural networks; Logical representations of neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/0895—Weakly supervised learning, e.g. semi-supervised or self-supervised learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Definitions
- This disclosure relates generally to graph neural networks and associated graph data and, in some particular embodiments or aspects, to methods, systems, and computer program products for generating a machine learning model based on anomaly nodes of a graph.
- Some machine learning models may receive an input dataset including data points for training. Each data point in the training dataset may have a different effect on the neural network (e.g., the trained neural network) that results from training.
- input datasets designed for neural networks may be independent and identically distributed. Input datasets that are independent and identically distributed may be used to determine an effect (e.g., an influence) of each data point of the input dataset.
- Graph neural networks (GNNs) are designed to receive graph data (e.g., graph data representing graphs), and the graph data may include nodes and edges.
- a GNN may include graph embeddings (e.g., node data embeddings regarding a graph, edge data embeddings regarding a graph, etc.) that provide low-dimensional feature vector representations of nodes in the GNN such that some property of the GNN is preserved.
- a GNN may be used to determine relationships (e.g., hidden relationships) among entities.
- datasets of graph data for training machine learning models may lack a sufficient number of labeled and/or unlabeled examples in order to properly train the machine learning models to make efficient and accurate determinations.
- a system for generating a machine learning model based on anomaly nodes of a graph comprises at least one processor programmed or configured to: receive a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; for each labeled anomaly node of the set of labeled anomaly nodes: randomly sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes; generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, wherein, when generating the plurality of new nodes, the at least one processor is programmed or configured to: generate a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node; combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and train a machine learning model based on the combined set of labeled anomaly nodes.
- a computer-implemented method for generating a machine learning model based on anomaly nodes of a graph comprises receiving, with at least one processor, a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; for each labeled anomaly node of the set of labeled anomaly nodes: randomly sampling, with at least one processor, a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes; generating, with at least one processor, a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, wherein generating the plurality of new nodes comprises: generating a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node; combining, with at least one processor, the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and training, with at least one processor, a machine learning model based on the combined set of labeled anomaly nodes.
- a computer program product for generating a machine learning model based on anomaly nodes of a graph comprises at least one non-transitory computer-readable medium comprising one or more instructions that, when executed by at least one processor, cause the at least one processor to: receive a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; for each labeled anomaly node of the set of labeled anomaly nodes: randomly sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes; generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, wherein, when generating the plurality of new nodes, the at least one processor is programmed or configured to: generate a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node; combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and train a machine learning model based on the combined set of labeled anomaly nodes.
- Clause 1 A system comprising: at least one processor programmed or configured to: receive a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; for each labeled anomaly node of the set of labeled anomaly nodes: randomly sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes; generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, wherein, when generating the plurality of new nodes, the at least one processor is programmed or configured to: generate a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node; combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and train a machine learning model based on the combined set of labeled anomaly nodes.
- Clause 2 The system of clause 1, wherein, when generating the label for each new node, the at least one processor is programmed or configured to: calculate a coefficient for a first new node of the plurality of new nodes based on an embedding of a first labeled anomaly node of the set of labeled anomaly nodes and an embedding of a first randomly sampled node of the set of randomly sampled nodes; and multiply a predetermined score for an anomaly node of the set of labeled anomaly nodes by the coefficient for the first new node to provide a score for the first new node.
- Clause 3 The system of clause 1 or 2, wherein, when calculating the coefficient for the first new node of the plurality of new nodes, the at least one processor is programmed or configured to: calculate a result of a sigmoid function with the embedding of the first labeled anomaly node and the embedding of the first randomly sampled node as inputs; and calculate the coefficient for the first new node based on the result of the sigmoid function.
- Clause 4 The system of any of clauses 1-3, wherein, when randomly sampling a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide the set of randomly sampled nodes, the at least one processor is programmed or configured to: for each labeled anomaly node of the set of labeled anomaly nodes: sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes according to a Metropolis-Hastings sampling algorithm to provide a set of randomly sampled nodes.
- Clause 5 The system of any of clauses 1-4, wherein the machine learning model is a graph neural network (GNN) machine learning model configured to provide an output that includes a prediction regarding whether a node of a graph is an anomaly.
- Clause 6 The system of any of clauses 1-5, wherein, when training the machine learning model, the at least one processor is programmed or configured to: train the machine learning model based on: a first distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, wherein the first distance comprises: a distance between an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes and the center of the combined set of labeled anomaly nodes in the embedding space, and a second distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, wherein the second distance comprises: a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the combined set of labeled anomaly nodes, and a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the set of normal nodes in the embedding space.
- Clause 7 The system of any of clauses 1-6, wherein the at least one processor is further programmed or configured to: calculate the center of the combined set of labeled anomaly nodes in the embedding space based on an aggregation of embeddings of each labeled anomaly node in the combined set of labeled anomaly nodes; and calculate the center of the set of normal nodes in the embedding space based on an aggregation of embeddings of each node in the set of randomly sampled nodes.
- Clause 8 A computer-implemented method comprising: receiving, with at least one processor, a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; for each labeled anomaly node of the set of labeled anomaly nodes: randomly sampling, with at least one processor, a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes; generating, with at least one processor, a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, wherein generating the plurality of new nodes comprises: generating a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node; combining, with at least one processor, the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and training, with at least one processor, a machine learning model based on the combined set of labeled anomaly nodes.
- Clause 9 The computer-implemented method of clause 8, wherein generating the label for each new node comprises: calculating a coefficient for a first new node of the plurality of new nodes based on an embedding of a first labeled anomaly node of the set of labeled anomaly nodes and an embedding of a first randomly sampled node of the set of randomly sampled nodes; and multiplying a predetermined score for an anomaly node of the set of labeled anomaly nodes by the coefficient for the first new node to provide a score for the first new node.
- Clause 10 The computer-implemented method of clause 8 or 9, wherein calculating the coefficient for the first new node of the plurality of new nodes comprises: calculating a result of a sigmoid function with the embedding of the first labeled anomaly node and the embedding of the first randomly sampled node as inputs; and calculating the coefficient for the first new node based on the result of the sigmoid function.
- Clause 11 The computer-implemented method of any of clauses 8-10, wherein randomly sampling a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide the set of randomly sampled nodes comprises: for each labeled anomaly node of the set of labeled anomaly nodes: sampling a node from among the set of unlabeled anomaly nodes and the set of normal nodes according to a Metropolis-Hastings sampling algorithm to provide a set of randomly sampled nodes.
- training the machine learning model comprises: training the machine learning model based on: a first distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, wherein the first distance comprises: a distance between an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes and the center of the combined set of labeled anomaly nodes in the embedding space, and a second distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, wherein the second distance comprises: a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the combined set of labeled anomaly nodes, and a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the set of normal nodes in the embedding space.
- Clause 16 The computer program product of clause 15, wherein the one or more instructions that cause the at least one processor to generate the label for each new node cause the at least one processor to: calculate a coefficient for a first new node of the plurality of new nodes based on an embedding of a first labeled anomaly node of the set of labeled anomaly nodes and an embedding of a first randomly sampled node of the set of randomly sampled nodes; and multiply a predetermined score for an anomaly node of the set of labeled anomaly nodes by the coefficient for the first new node to provide a score for the first new node.
- FIG. 1 is a diagram of a non-limiting embodiment or aspect of an environment in which methods, systems, and/or computer program products, described herein, may be implemented according to the principles of the presently disclosed subject matter;
- FIG. 3 is a flowchart of a non-limiting embodiment or aspect of a process for generating a machine learning model based on anomaly nodes of a graph; and
- FIGS. 4A-4F are diagrams of non-limiting embodiments or aspects of an implementation of a process for generating a machine learning model based on anomaly nodes of a graph.
- the terms “communication” and “communicate” may refer to the reception, receipt, transmission, transfer, provision, and/or the like of information (e.g., data, signals, messages, instructions, commands, and/or the like).
- for one unit (e.g., a device, a system, a component of a device or system, combinations thereof, and/or the like) to be in communication with another unit means that the one unit is able to directly or indirectly receive information from and/or transmit information to the other unit.
- This may refer to a direct or indirect connection (e.g., a direct communication connection, an indirect communication connection, and/or the like) that is wired and/or wireless in nature.
- two units may be in communication with each other even though the information transmitted may be modified, processed, relayed, and/or routed between the first and second unit.
- a first unit may be in communication with a second unit even though the first unit passively receives information and does not actively transmit information to the second unit.
- a first unit may be in communication with a second unit if at least one intermediary unit (e.g., a third unit located between the first unit and the second unit) processes information received from the first unit and communicates the processed information to the second unit.
- a message may refer to a network packet (e.g., a data packet and/or the like) that includes data. It will be appreciated that numerous other arrangements are possible.
- issuer institution may refer to one or more entities that provide accounts to customers for conducting transactions (e.g., payment transactions), such as initiating credit and/or debit payments.
- issuer institution may provide an account identifier, such as a primary account number (PAN), to a customer that uniquely identifies one or more accounts associated with that customer.
- the account identifier may be embodied on a portable financial device, such as a physical financial instrument, e.g., a payment card, and/or may be electronic and used for electronic payments.
- issuer institution and “issuer institution system” may also refer to one or more computer systems operated by or on behalf of an issuer institution, such as a server computer executing one or more software applications.
- issuer institution system may include one or more authorization servers for authorizing a transaction.
- the account identifier may be embodied on a physical financial instrument (e.g., a portable financial instrument, a payment card, a credit card, a debit card, and/or the like) and/or may be electronic information communicated to the user that the user may use for electronic payments.
- the account identifier may be an original account identifier, where the original account identifier was provided to a user at the creation of the account associated with the account identifier.
- the account identifier may be an account identifier (e.g., a supplemental account identifier) that is provided to a user after the original account identifier was provided to the user.
- an account identifier may be directly or indirectly associated with an issuer institution such that an account identifier may be a payment token that maps to a PAN or other type of identifier.
- Account identifiers may be alphanumeric, any combination of characters and/or symbols, and/or the like.
- An issuer institution may be associated with a bank identification number (BIN) that uniquely identifies the issuer institution.
- a payment token may include a series of numeric and/or alphanumeric characters that may be used as a substitute for an original account identifier. For example, a payment token “4900 0000 0000 0001” may be used in place of a PAN “4147 0900 0000 1234.”
- a payment token may be “format preserving” and may have a numeric format that conforms to the account identifiers used in existing payment processing networks (e.g., ISO 8583 financial transaction message format).
- a payment token may be used in place of a PAN to initiate, authorize, settle, or resolve a payment transaction or represent the original credential in other systems where the original credential would typically be provided.
- a token value may be generated such that the recovery of the original PAN or other account identifier from the token value may not be computationally derived (e.g., with a one-way hash or other cryptographic function).
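As an illustrative sketch only, and not the patent's token scheme, a one-way construction in the sense just described (the original PAN is not computationally recoverable from the token value) might look like the following Python; the salting strategy and truncation length are assumptions, and a real, format-preserving scheme would differ:

```python
import hashlib
import secrets

def generate_token(pan: str) -> str:
    """Hedged sketch: derive a token from a PAN via a salted one-way hash.

    The PAN cannot be computationally derived from the token value; a token
    vault (as described elsewhere in this text) would separately maintain
    the one-to-one token-to-PAN mapping.
    """
    salt = secrets.token_hex(16)  # random salt; assumed, not specified here
    digest = hashlib.sha256((salt + pan).encode()).hexdigest()
    return digest[:16]  # illustrative truncation; real schemes differ
```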
- the token format may be configured to allow the entity receiving the payment token to identify it as a payment token and recognize the entity that issued the token.
- a requestor may request registration with a network token system, request token generation, token activation, token de-activation, token exchange, other token lifecycle management related processes, and/or any other token related processes.
- a requestor may interface with a network token system through any suitable communication network and/or protocol (e.g., using HTTPS, SOAP, and/or an XML interface among others).
- a token requestor may include card-on-file merchants, acquirers, acquirer processors, payment gateways acting on behalf of merchants, payment enablers (e.g., original equipment manufacturers, mobile network operators, and/or the like), digital wallet providers, issuers, third-party wallet providers, payment processing networks, and/or the like.
- a token requestor may request tokens for multiple domains and/or channels. Additionally or alternatively, a token requestor may be registered and identified uniquely by the token service provider within the tokenization ecosystem. For example, during token requestor registration, the token service provider may formally process a token requestor's application to participate in the token service system. In some non-limiting embodiments or aspects, the token service provider may collect information pertaining to the nature of the requestor and relevant use of tokens to validate and formally approve the token requestor and establish appropriate domain restriction controls. Additionally or alternatively, successfully registered token requestors may be assigned a token requestor identifier that may also be entered and maintained within the token vault.
- token requestor identifiers may be revoked and/or token requestors may be assigned new token requestor identifiers. In some non-limiting embodiments or aspects, this information may be subject to reporting and audit by the token service provider.
- token service provider may refer to an entity including one or more server computers in a token service system that generates, processes and maintains payment tokens.
- the token service provider may include or be in communication with a token vault where the generated tokens are stored. Additionally or alternatively, the token vault may maintain one-to-one mapping between a token and a PAN represented by the token.
- the token service provider may have the ability to set aside licensed BINs as token BINs to issue tokens for the PANs that may be submitted to the token service provider.
- various entities of a tokenization ecosystem may assume the roles of the token service provider.
- payment networks and issuers or their agents may become the token service provider by implementing the token services according to nonlimiting embodiments or aspects of the presently disclosed subject matter.
- a token service provider may provide reports or data output to reporting tools regarding approved, pending, or declined token requests, including any assigned token requestor ID.
- the token service provider may provide data output related to token-based transactions to reporting tools and applications and present the token and/or PAN as appropriate in the reporting output.
- the EMVCo standards organization may publish specifications defining how tokenized systems may operate. For example, such specifications may be informative, but they are not intended to be limiting upon any of the presently disclosed subject matter.
- POS device may refer to one or more devices, which may be used by a merchant to initiate transactions (e.g., a payment transaction), engage in transactions, and/or process transactions.
- a POS device may include one or more computers, peripheral devices, card readers, near-field communication (NFC) receivers, radio frequency identification (RFID) receivers, and/or other contactless transceivers or receivers, contact-based receivers, payment terminals, computers, servers, input devices, and/or the like.
- POS system may refer to one or more computers and/or peripheral devices used by a merchant to conduct a transaction.
- a POS system may include one or more POS devices and/or other like devices that may be used to conduct a payment transaction.
- the term “transaction service provider” may refer to an entity that receives transaction authorization requests from merchants or other entities and provides guarantees of payment, in some cases through an agreement between the transaction service provider and the issuer institution.
- a transaction service provider may include a credit card company, a debit card company, and/or the like.
- the term “transaction service provider system” may also refer to one or more computer systems operated by or on behalf of a transaction service provider, such as a transaction processing server executing one or more software applications.
- a transaction processing server may include one or more processors and, in some non-limiting embodiments or aspects, may be operated by or on behalf of a transaction service provider.
- the term “acquirer” may refer to an entity licensed by the transaction service provider and approved by the transaction service provider to originate transactions (e.g., payment transactions) using a portable financial device associated with the transaction service provider.
- the term “acquirer system” may also refer to one or more computer systems, computer devices, and/or the like operated by or on behalf of an acquirer.
- the transactions may include payment transactions (e.g., purchases, original credit transactions (OCTs), account funding transactions (AFTs), and/or the like).
- the acquirer may be authorized by the transaction service provider to assign merchant or service providers to originate transactions using a portable financial device of the transaction service provider.
- the acquirer may contract with payment facilitators to enable the payment facilitators to sponsor merchants.
- the acquirer may monitor compliance of the payment facilitators in accordance with regulations of the transaction service provider.
- the acquirer may conduct due diligence of the payment facilitators and ensure that proper due diligence occurs before signing a sponsored merchant.
- the acquirer may be liable for all transaction service provider programs that the acquirer operates or sponsors.
- the acquirer may be responsible for the acts of the acquirer's payment facilitators, merchants that are sponsored by an acquirer's payment facilitators, and/or the like.
- an acquirer may be a financial institution, such as a bank.
- an electronic wallet may refer to one or more electronic devices and/or one or more software applications configured to initiate and/or conduct transactions (e.g., payment transactions, electronic payment transactions, and/or the like).
- an electronic wallet may include a user device (e.g., a mobile device) executing an application program and server-side software and/or databases for maintaining and providing transaction data to the user device.
- the term “electronic wallet provider” may include an entity that provides and/or maintains an electronic wallet and/or an electronic wallet mobile application for a user (e.g., a customer).
- examples of an electronic wallet provider include, but are not limited to, Google Pay®, Android Pay®, Apple Pay®, and Samsung Pay®.
- the term “electronic wallet provider system” may refer to one or more computer systems, computer devices, servers, groups of servers, and/or the like operated by or on behalf of an electronic wallet provider.
- the portable financial device may include volatile or non-volatile memory to store information (e.g., an account identifier, a name of the account holder, and/or the like).
- the term “payment gateway” may refer to an entity and/or a payment processing system operated by or on behalf of such an entity (e.g., a merchant service provider, a payment service provider, a payment facilitator, a payment facilitator that contracts with an acquirer, a payment aggregator, and/or the like), which provides payment services (e.g., transaction service provider payment services, payment processing services, and/or the like) to one or more merchants.
- the payment services may be associated with the use of portable financial devices managed by a transaction service provider.
- the term “payment gateway system” may refer to one or more computer systems, computer devices, servers, groups of servers, and/or the like operated by or on behalf of a payment gateway and/or to a payment gateway itself.
- the term “payment gateway mobile application” may refer to one or more electronic devices and/or one or more software applications configured to provide payment services for transactions (e.g., payment transactions, electronic payment transactions, and/or the like).
- client device may refer to one or more client-side devices or systems (e.g., remote from a transaction service provider) used to initiate or facilitate a transaction (e.g., a payment transaction).
- client device may refer to one or more POS devices used by a merchant, one or more acquirer host computers used by an acquirer, one or more mobile devices used by a user, and/or the like.
- a client device may be an electronic device configured to communicate with one or more networks and initiate or facilitate transactions.
- a client device may include one or more computers, portable computers, laptop computers, tablet computers, mobile devices, cellular phones, wearable devices (e.g., watches, glasses, lenses, clothing, and/or the like), PDAs, and/or the like.
- a “client” may also refer to an entity (e.g., a merchant, an acquirer, and/or the like) that owns, utilizes, and/or operates a client device for initiating transactions (e.g., for initiating transactions with a transaction service provider).
- the term “computing device” may refer to one or more electronic devices that are configured to directly or indirectly communicate with or over one or more networks.
- a computing device may be a mobile device, a desktop computer, and/or any other like device.
- the term “computer” may refer to any computing device that includes the necessary components to receive, process, and output data, and normally includes a display, a processor, a memory, an input device, and a network interface.
- the term “server” may refer to or include one or more processors or computers, storage devices, or similar computer arrangements that are operated by or facilitate communication and processing for multiple parties in a network environment, such as the Internet, although it will be appreciated that communication may be facilitated over one or more public or private network environments and that various other arrangements are possible.
- multiple computers, e.g., servers, or other computerized devices, such as POS devices, directly or indirectly communicating in the network environment may constitute a “system,” such as a merchant's POS system.
- system may refer to one or more computing devices or combinations of computing devices (e.g., processors, servers, client devices, software applications, components of such, and/or the like).
- references to “a device,” “a server,” “a processor,” and/or the like, as used herein, may refer to a previously-recited device, server, or processor that is recited as performing a previous step or function, a different server or processor, and/or a combination of servers and/or processors.
- a first server or a first processor that is recited as performing a first step or a first function may refer to the same or different server or the same or different processor recited as performing a second step or a second function.
- Non-limiting embodiments or aspects of the disclosed subject matter are directed to methods, systems, and computer program products for generating a machine learning model based on anomaly nodes of a graph.
- a graph learning system may include at least one processor programmed or configured to receive a dataset comprising a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes, for each labeled anomaly node of the set of labeled anomaly nodes: randomly sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes, generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes, where, when generating the plurality of new nodes, the at least one processor is programmed or configured to generate a label for each new node, wherein the label is associated with a score indicating how closely the new node represents an abnormal node, combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes, and train a machine learning model based on the combined set of labeled anomaly nodes.
- the at least one processor, when generating the label for each new node, is programmed or configured to calculate a coefficient for a first new node of the plurality of new nodes based on an embedding of a first labeled anomaly node of the set of labeled anomaly nodes and an embedding of a first randomly sampled node of the set of randomly sampled nodes and multiply a predetermined score for an anomaly node of the set of labeled anomaly nodes by the coefficient for the first new node to provide a score for the first new node.
- the at least one processor, when calculating the coefficient for the first new node of the plurality of new nodes, is programmed or configured to calculate a result of a sigmoid function with the embedding of the first labeled anomaly node and the embedding of the first randomly sampled node as inputs and calculate the coefficient for the first new node based on the result of the sigmoid function.
- the at least one processor, when randomly sampling a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide the set of randomly sampled nodes, is programmed or configured to, for each labeled anomaly node of the set of labeled anomaly nodes, sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes according to a Metropolis-Hastings sampling algorithm to provide a set of randomly sampled nodes.
- the machine learning model is a graph neural network (GNN) machine learning model configured to provide an output that includes a prediction regarding whether a node of a graph is an anomaly.
- the at least one processor, when training the machine learning model, is programmed or configured to train the machine learning model based on a first distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, where the first distance comprises a distance between an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes and the center of the combined set of labeled anomaly nodes in the embedding space, and a second distance associated with an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, where the second distance comprises a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the combined set of labeled anomaly nodes, and a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the set of normal nodes in the embedding space.
- the at least one processor is further programmed or configured to calculate the center of the combined set of labeled anomaly nodes in the embedding space based on an aggregation of embeddings of each labeled anomaly node in the combined set of labeled anomaly nodes and calculate the center of the set of normal nodes in the embedding space based on an aggregation of embeddings of each node in the set of randomly sampled nodes.
- satisfying a threshold may refer to a value being greater than the threshold, more than the threshold, higher than the threshold, greater than or equal to the threshold, less than the threshold, fewer than the threshold, lower than the threshold, less than or equal to the threshold, equal to the threshold, etc.
- FIG. 1 is a diagram of a non-limiting embodiment or aspect of an environment 100 in which systems, methods, and/or computer program products, as described herein, may be implemented.
- environment 100 includes graph learning system 102, data source 102a, transaction service provider system 104, issuer system 106, user device 108, and communication network 110.
- Graph learning system 102, transaction service provider system 104, issuer system 106, and/or user device 108 may interconnect (e.g., establish a connection to communicate) via wired connections, wireless connections, or a combination of wired and wireless connections.
- Graph learning system 102 may include one or more devices configured to communicate with transaction service provider system 104 and/or user device 108 via communication network 110.
- graph learning system 102 may include a server, a group of servers, and/or other like devices.
- graph learning system 102 may be associated with a transaction service provider system.
- graph learning system 102 may be operated by the transaction service provider system.
- graph learning system 102 may be a component of transaction service provider system 104.
- graph learning system 102 may be in communication with data source 102a (e.g., a data storage device), which may be local or remote to graph learning system 102.
- graph learning system 102 may be capable of receiving information from, storing information in, transmitting information to, and/or searching information stored in data source 102a.
- Transaction service provider system 104 may include one or more devices configured to communicate with graph learning system 102, issuer system 106, and/or user device 108 via communication network 110.
- transaction service provider system 104 may include a computing device, such as a server, a group of servers, and/or other like devices.
- transaction service provider system 104 may be associated with a transaction service provider.
- Issuer system 106 may include one or more devices configured to communicate with graph learning system 102, transaction service provider system 104, and/or user device 108 via communication network 110.
- issuer system 106 may include a computing device, such as a server, a group of servers, and/or other like devices.
- issuer system 106 may be associated with an issuer institution.
- User device 108 may include a computing device configured to communicate with graph learning system 102, transaction service provider system 104, and/or issuer system 106 via communication network 110.
- user device 108 may include a computing device, such as a desktop computer, a portable computer (e.g., tablet computer, a laptop computer, and/or the like), a mobile device (e.g., a cellular phone, a smartphone, a personal digital assistant, a wearable device, and/or the like), and/or other like devices.
- user device 108 may be associated with a user (e.g., an individual operating user device 108).
- Communication network 110 may include one or more wired and/or wireless networks.
- communication network 110 may include a cellular network (e.g., a long-term evolution (LTE) network, a third generation (3G) network, a fourth generation (4G) network, a fifth generation (5G) network, a code division multiple access (CDMA) network, and/or the like), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the public switched telephone network (PSTN)), a private network (e.g., a private network associated with a transaction service provider), an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks.
- The number and arrangement of systems, devices, and/or networks shown in FIG. 1 are provided as an example. There may be additional systems, devices, and/or networks; fewer systems, devices, and/or networks; different systems, devices, and/or networks; and/or differently arranged systems, devices, and/or networks than those shown in FIG. 1. Furthermore, two or more systems or devices shown in FIG. 1 may be implemented within a single system or device, or a single system or device shown in FIG. 1 may be implemented as multiple, distributed systems or devices.
- FIG. 2 is a diagram of example components of a device 200.
- Device 200 may correspond to one or more devices of graph learning system 102, one or more devices of transaction service provider system 104, issuer system 106, and/or user device 108.
- graph learning system 102, transaction service provider system 104, issuer system 106, and/or user device 108 may include at least one device 200 and/or at least one component of device 200.
- device 200 may include bus 202, processor 204, memory 206, storage component 208, input component 210, output component 212, and communication interface 214.
- Bus 202 may include a component that permits communication among the components of device 200.
- processor 204 may be implemented in hardware, software, firmware, and/or any combination thereof.
- processor 204 may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), and/or the like), a microprocessor, a digital signal processor (DSP), and/or any processing component (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), and/or the like), and/or the like, which can be programmed to perform a function.
- Memory 206 may include random access memory (RAM), read-only memory (ROM), and/or another type of dynamic or static storage device (e.g., flash memory, magnetic memory, optical memory, and/or the like) that stores information and/or instructions for use by processor 204.
- Storage component 208 may store information and/or software related to the operation and use of device 200.
- storage component 208 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk, and/or the like), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of computer-readable medium, along with a corresponding drive.
- Input component 210 may include a component that permits device 200 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, a camera, and/or the like). Additionally or alternatively, input component 210 may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, an actuator, and/or the like). Output component 212 may include a component that provides output information from device 200 (e.g., a display, a speaker, one or more light-emitting diodes (LEDs), and/or the like).
- Communication interface 214 may include a transceiver-like component (e.g., a transceiver, a receiver and transmitter that are separate, and/or the like) that enables device 200 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections.
- Communication interface 214 may permit device 200 to receive information from another device and/or provide information to another device.
- communication interface 214 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi® interface, a Bluetooth® interface, a Zigbee® interface, a cellular network interface, and/or the like.
- Software instructions may be read into memory 206 and/or storage component 208 from another computer-readable medium or from another device via communication interface 214. When executed, software instructions stored in memory 206 and/or storage component 208 may cause processor 204 to perform one or more processes described herein. Additionally or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, embodiments or aspects described herein are not limited to any specific combination of hardware circuitry and software.
- FIG. 3 is a flowchart of a non-limiting embodiment or aspect of a process 300 for generating a machine learning model based on anomaly nodes of a graph.
- one or more of the steps of process 300 may be performed (e.g., completely, partially, etc.) by graph learning system 102 (e.g., one or more devices of graph learning system 102).
- one or more of the steps of process 300 may be performed (e.g., completely, partially, etc.) by another device or a group of devices separate from or including graph learning system 102 (e.g., one or more devices of graph learning system 102), transaction service provider system 104 (e.g., one or more devices of transaction service provider system 104), issuer system 106, and/or user device 108.
- graph learning system 102 may receive a dataset including a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes.
- graph learning system 102 may receive the dataset from data source 102a, transaction service provider system 104, issuer system 106, and/or user device 108.
- a number of nodes of the set of labeled anomaly nodes may be less than a number of nodes of the set of unlabeled anomaly nodes and/or a number of nodes of the set of normal nodes.
- the dataset may include graph data associated with a graph.
- the graph may include a plurality of nodes and a plurality of edges
- the graph data may include a plurality of node embeddings associated with a number of nodes in the graph and node data associated with each node of the graph.
- the node data may include data associated with parameters of each node in the graph.
- the node data may include user data associated with a plurality of users and/or entity data associated with a plurality of entities.
- the plurality of node embeddings may include a first set of node embeddings and/or a second set of node embeddings. The first set of node embeddings may be based on the user data and/or the second set of node embeddings may be based on the entity data.
- the node data may include data associated with parameters of each node in the graph.
- the dataset may be associated with a population of entities (e.g., users, accountholders, merchants, issuers, etc.) that includes a plurality of data instances associated with a plurality of features.
- the plurality of data instances may represent a plurality of transactions (e.g., electronic payment transactions) conducted by the population.
- the plurality of transaction parameters may include electronic wallet card data associated with an electronic card (e.g., an electronic credit card, an electronic debit card, an electronic loyalty card, and/or the like), decision data associated with a decision (e.g., a decision to approve or deny a transaction authorization request), authorization data associated with an authorization response (e.g., an approved spending limit, an approved transaction value, and/or the like), a PAN, an authorization code (e.g., a personal identification number (PIN), etc.), data associated with a transaction amount (e.g., an approved limit, a transaction value, etc.), data associated with a transaction date and time, data associated with a conversion rate of a currency, data associated with a merchant type (e.g., a merchant category code that indicates a type of goods, such as grocery, fuel, and/or the like), data associated with an acquiring institution country, data associated with an identifier of a country associated with the PAN, data associated with a response code, and/or the like.
- the attributes of each node of the plurality of nodes may be represented by the following matrix, where x_i represents an attribute vector for node v_i:
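The matrix itself did not survive extraction. A plausible reconstruction, assuming n nodes each carrying a d-dimensional attribute vector, is the standard node-attribute matrix:

$$
X = \left[ x_1, x_2, \ldots, x_n \right]^{\top} \in \mathbb{R}^{n \times d}
$$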
- graph learning system 102 may, for each labeled anomaly node of the set of labeled anomaly nodes, sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes according to a Metropolis-Hastings sampling algorithm to provide a set of randomly sampled nodes. For example, graph learning system 102 may determine a probability of sampling a node, represented by u, for a labeled node, represented by v, based on the following equation:
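The equation did not survive extraction. A plausible reconstruction, following the standard Metropolis-Hastings acceptance rule with the symbols defined below, is:

$$
p(u \mid v) = \min\left(1,\ \frac{\pi(u_y)\, q(u_x \mid u_y)}{\pi(u_x)\, q(u_y \mid u_x)}\right)
$$

where the proposed node u_y is accepted whenever t ≤ p(u | v), and where π(·) is assumed to score a candidate node by the similarity of its graph-encoder output ge(·) to the encoding of the labeled anomaly node v ∈ V_L; the exact target density is not recoverable from this text.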
- u_x represents a randomly selected node
- u_y represents a sample of the randomly selected node u_x
- t ∈ [0, 1] represents a number generated uniformly at random
- ge(·) represents a graph encoder
- V_L represents the set of labeled anomaly nodes
- q represents a proposal distribution
- N represents a step size
- V_s represents the set of randomly sampled nodes.
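The sketch below illustrates, in Python, one way the sampling loop could be implemented under the assumptions above. The names graph_encoder and candidates, and the similarity-based target density, are hypothetical stand-ins; the patent text does not fix the target or proposal distributions.

```python
import random
import numpy as np

def mh_sample_node(candidates, labeled_anomaly, graph_encoder, n_steps=50):
    """Metropolis-Hastings sampling of one node per labeled anomaly node.

    candidates: list of candidate nodes (unlabeled anomaly + normal nodes).
    labeled_anomaly: the labeled anomaly node v driving the target density.
    graph_encoder: hypothetical ge(.) mapping a node to an embedding vector.
    n_steps: step size N (number of MH iterations).
    """
    z_v = graph_encoder(labeled_anomaly)

    def target(u):
        # Assumed target density: similarity between ge(u) and ge(v).
        # Unnormalized is fine; MH only needs ratios.
        return float(np.exp(np.dot(graph_encoder(u), z_v)))

    u_x = random.choice(candidates)          # randomly selected node u_x
    for _ in range(n_steps):
        u_y = random.choice(candidates)      # uniform proposal q(u_y | u_x)
        t = random.uniform(0.0, 1.0)         # t ~ Uniform[0, 1]
        # With a symmetric (uniform) proposal, q cancels in the ratio.
        accept = min(1.0, target(u_y) / target(u_x))
        if t <= accept:
            u_x = u_y
    return u_x

# One sampled node per labeled anomaly node forms the set V_s:
# V_s = [mh_sample_node(candidates, v, graph_encoder) for v in V_L]
```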
- process 300 includes generating a plurality of new nodes.
- graph learning system 102 may generate a plurality of new nodes (e.g., a set of 5, 10, 15, 30, 50, 100, 200, 300 or more new nodes).
- graph learning system 102 may generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes.
- graph learning system 102 may generate a label for each new node. The label may be associated with a score indicating how closely the new node represents an abnormal node.
- graph learning system 102 may calculate a coefficient for a first new node of the plurality of new nodes based on an embedding of a first labeled anomaly node of the set of labeled anomaly nodes and an embedding of a first randomly sampled node of the set of randomly sampled nodes. In some non-limiting embodiments or aspects, when generating the label for each new node, graph learning system 102 may multiply a predetermined score for an anomaly node of the set of labeled anomaly nodes by the coefficient for the first new node to provide a score for the first new node.
- graph learning system 102 may calculate a result of a sigmoid function with the embedding of the first labeled anomaly node and the embedding of the first randomly sampled node as inputs. In some non-limiting embodiments or aspects, when calculating the coefficient for the first new node of the plurality of new nodes, graph learning system 102 may calculate the coefficient for the first new node based on the result of the sigmoid function.
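As a concrete illustration of this labeling step, the hedged sketch below derives a coefficient from a sigmoid over the two embeddings and scales a predetermined anomaly score by it. Combining the embeddings via an inner product, and the default value of the predetermined score, are assumptions; the text specifies only that the sigmoid takes the two embeddings as inputs.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def score_new_node(z_anomaly, z_sampled, predetermined_score=1.0):
    """Score a new node from two embeddings (hedged sketch).

    z_anomaly: embedding of a labeled anomaly node.
    z_sampled: embedding of a randomly sampled node.
    predetermined_score: assumed score for a labeled anomaly node; the
                         exact value is not specified in this text.
    """
    # Coefficient from a sigmoid over the two embeddings; reducing them
    # to a scalar via an inner product is an assumed design choice.
    coefficient = sigmoid(float(np.dot(z_anomaly, z_sampled)))
    # Score for the new node: predetermined anomaly score times coefficient.
    return predetermined_score * coefficient
```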
- graph learning system 102 may modify (e.g., remove or delete) one or more nodes of the dataset (e.g., the set of labeled anomaly nodes, the set of unlabeled anomaly nodes, and the set of normal nodes), the labeled anomaly nodes, and/or the set of randomly sampled nodes, to provide the set of new nodes.
- graph learning system 102, when generating the set of new nodes, may interpolate node embeddings and labels. When interpolating the node embeddings, graph learning system 102 may perform a node generation process (e.g., an adaptive node mixup). For example, for a pair of training samples, (x_i, y_i) and (x_j, y_j), where x represents an input feature and y represents a label, graph learning system 102 may obtain a new node, represented by (x̃, ỹ), based on the following, where λ ∈ [0, 1] is sampled from a Beta distribution (e.g., Beta(α, α)):
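The mixup expressions themselves were lost in extraction; the standard formulation the passage describes, with λ drawn from the stated Beta distribution, is:

$$
\tilde{x} = \lambda x_i + (1 - \lambda)\, x_j, \qquad
\tilde{y} = \lambda y_i + (1 - \lambda)\, y_j, \qquad
\lambda \sim \mathrm{Beta}(\alpha, \alpha)
$$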
- graph learning system 102 may perform a second node generation process (e.g., a second adaptive node mixup) to adaptively interpolate node features and node labels.
- graph learning system 102 may, first, introduce an abnormality adapter, represented by λ_a, with a confined value range [0.5, 1], which may preserve the anomalous information, S, and, second, graph learning system 102 may regulate the abnormality adapter according to the learned representations of the labeled anomaly and the sampled node.
- the abnormality adapter may be calculated based on the following, where v_i is a node selected from the set of labeled anomaly nodes, where v_j is a randomly sampled node, where z_i and z_j are the representations of nodes v_i and v_j from the graph neural network (GNN), respectively, and where σ(·) represents the sigmoid activation function:
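The defining equation was lost in extraction. A plausible reconstruction that confines λ_a to [0.5, 1] and grows with the agreement of the two representations is:

$$
\lambda_a = \tfrac{1}{2}\left(1 + \sigma\!\left(z_i^{\top} z_j\right)\right)
$$

This form matches the behavior described next: a large inner product pushes λ_a toward 1, and a small one pushes it toward 0.5.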
- when an inner product of the corresponding embeddings (e.g., z_i·z_j) is a first, larger number, λ_a will approximate 1, preserving the anomalous information; when the inner product of the corresponding embeddings is a second number smaller than the first number, λ_a will approximate 0.5.
- graph learning system 102 may perform the second node generation process (e.g., the second adaptive node mixup) using the new abnormality adapter to compute the mixed embedding and label based on the following:
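The mixed-embedding expression was likewise lost in extraction; given the abnormality adapter above, the adaptive mixup plausibly reads:

$$
\tilde{z} = \lambda_a z_i + (1 - \lambda_a)\, z_j, \qquad
\tilde{y} = \lambda_a y_i + (1 - \lambda_a)\, y_j
$$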
- process 300 includes combining the plurality of new nodes with the set of labeled anomaly nodes.
- graph learning system 102 may combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes (e.g., a set of 5, 10, 15, 30, 50, 100, 200, 300 or more combined labeled anomaly nodes).
- graph learning system 102 may train a GNN based on a set of graph features to provide a trained GNN.
- graph learning system 102 may train a machine learning model based on an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, a center of the combined set of labeled anomaly nodes in an embedding space, and a center of the set of normal nodes in the embedding space.
- graph learning system 102 may calculate the center of the combined set of labeled anomaly nodes in the embedding space. For example, graph learning system 102 may calculate the center of the combined set of labeled anomaly nodes based on an aggregation of embeddings of each labeled anomaly node in the combined set of labeled anomaly nodes. In some non-limiting embodiments or aspects, graph learning system 102 may calculate the center of the set of normal nodes in the embedding space. For example, graph learning system 102 may calculate the center of the set of normal nodes in the embedding space based on an aggregation of embeddings of each node in the set of randomly sampled nodes.
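As a concrete reading of "aggregation," assuming a simple mean over embeddings (the text does not fix the aggregation function), the two centers would be:

$$
c_a = \frac{1}{|\tilde{V}_a|} \sum_{v \in \tilde{V}_a} z_v, \qquad
c_n = \frac{1}{|V_s|} \sum_{v \in V_s} z_v
$$

where $\tilde{V}_a$ denotes the combined set of labeled anomaly nodes and $V_s$ the set of randomly sampled nodes.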
- the second distance may include a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the combined set of labeled anomaly nodes and a distance of an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes to a center of the set of normal nodes in the embedding space.
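A minimal sketch of a training objective built from these distances, assuming Euclidean distance and a pull-toward-anomaly-center, push-from-normal-center combination (the text names the distances but not the exact loss), follows:

```python
import numpy as np

def distance_loss(anomaly_embeddings, center_anomaly, center_normal):
    """Hedged sketch of a loss over the combined labeled anomaly nodes.

    Pulls each labeled anomaly embedding toward the anomaly center and
    pushes it away from the normal-node center; the exact combination
    used during training is an assumption.
    """
    loss = 0.0
    for z in anomaly_embeddings:
        d_to_anomaly = np.linalg.norm(z - center_anomaly)  # first distance
        d_to_normal = np.linalg.norm(z - center_normal)    # second distance
        loss += d_to_anomaly - d_to_normal
    return loss / len(anomaly_embeddings)
```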
- the trained GNN may include one or more layers (e.g., an input layer, one or more hidden layers, an output layer, and/or the like).
- the one or more layers may include an intermediate layer.
- graph learning system 102 may execute a fraud prevention procedure based on a classification of an input as provided by the machine learning model.
- Additionally or alternatively, graph learning system 102 may forego performing the fraud prevention procedure associated with protection of the account of the user.
- one or more of the steps of the process may be performed (e.g., completely, partially, etc.) by another device or a group of devices separate from or including graph learning system 102 (e.g., one or more devices of graph learning system 102), transaction service provider system 104 (e.g., one or more devices of transaction service provider system 104), issuer system 106 (e.g., one or more devices of issuer system 106), and/or user device 108.
- graph learning system 102 may receive a dataset that includes a set of labeled anomaly nodes (e.g., a set of nodes labeled as anomalies), a set of unlabeled anomaly nodes (e.g., a set of nodes that are unlabeled but are potential anomalies), and a set of normal nodes (e.g., a set of nodes labeled as normal, such that the set of nodes are not anomalies) from data source 102a.
- the dataset may be associated with graph data pertaining to a graph
- the graph data may include node data associated with a plurality of nodes of the graph and/or edge data associated with a plurality of edges of the graph.
- graph learning system 102 may randomly sample a node from among the set of unlabeled anomaly nodes and the set of normal nodes to provide a set of randomly sampled nodes for each labeled anomaly node of the set of labeled anomaly nodes.
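A minimal sketch of the per-anomaly sampling step described above; the uniform draw from the pooled unlabeled and normal nodes follows the text, while the function and parameter names are illustrative.

```python
import random

def sample_partner_nodes(labeled_anomalies, unlabeled_nodes, normal_nodes, seed=0):
    """For each labeled anomaly node, draw one partner node uniformly at
    random from the union of the unlabeled and normal node sets."""
    rng = random.Random(seed)
    pool = list(unlabeled_nodes) + list(normal_nodes)
    return {v: rng.choice(pool) for v in labeled_anomalies}
```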
- graph learning system 102 may generate a plurality of new nodes.
- graph learning system 102 may generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes.
- graph learning system 102 may generate an embedding of a sampled node (e.g., a labeled anomaly node, an unlabeled anomaly node, or a normal node) and determine a node that is a nearest neighbor of the sampled node.
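The nearest-neighbor step might look like the following, assuming Euclidean distance over GNN embeddings; the names and the distance measure are illustrative choices.

```python
import torch

def nearest_neighbor(z_query, z_candidates):
    """Index of the candidate embedding closest to z_query.

    z_query: (d,) embedding of the sampled node.
    z_candidates: (n, d) embeddings of the candidate neighbors.
    """
    dists = torch.cdist(z_query.unsqueeze(0), z_candidates).squeeze(0)  # (n,) distances
    return int(torch.argmin(dists))
```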
- graph learning system 102 may compare the score for the first new node to a threshold value to determine a label for the first new node. In some non-limiting embodiments or aspects, if the score of the first new node satisfies the threshold value, graph learning system 102 may generate a first label (e.g., a positive label, such as a label that indicates a node is an anomaly) for the first new node. If the score of the first new node does not satisfy the threshold value, graph learning system 102 may generate a second label (e.g., a negative label, such as a label that indicates a node is not an anomaly) for the first new node.
- the first label is a positive label of a binary classification and the second label is a negative label of the binary classification, or vice versa.
- graph learning system 102 may repeat the above process for each new node of the plurality of new nodes (e.g., each new node of the plurality of new nodes that does not have a label).
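The score-thresholding loop over the generated nodes reduces to a few lines; the 0.5 threshold and the 1/0 label encoding below are assumptions, since the text only specifies "a threshold value" and positive/negative labels.

```python
def label_generated_nodes(scores, threshold=0.5):
    """Binary-label each generated node from its score.

    A score satisfying the threshold receives the positive (anomaly)
    label 1; otherwise the node receives the negative label 0.
    """
    return [1 if score >= threshold else 0 for score in scores]
```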
- graph learning system 102 may train a Graph Neural Network (GNN) machine learning model based on an embedding.
- graph learning system 102 may train a machine learning model based on an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, a center of the combined set of labeled anomaly nodes in an embedding space, and/or a center of the set of normal nodes in the embedding space.
Abstract
Disclosed are systems that include at least one processor to receive a dataset including a set of labeled anomaly nodes, a set of unlabeled anomaly nodes, and a set of normal nodes; randomly sample a node to provide a set of randomly sampled nodes; generate a plurality of new nodes based on the set of labeled anomaly nodes and the set of randomly sampled nodes; combine the plurality of new nodes with the set of labeled anomaly nodes to provide a combined set of labeled anomaly nodes; and train a machine learning model based on an embedding of each labeled anomaly node in the combined set of labeled anomaly nodes, a center of the combined set of labeled anomaly nodes in an embedding space, and a center of the set of normal nodes in the embedding space. Methods and computer program products are also disclosed.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202263415442P | 2022-10-12 | 2022-10-12 | |
| PCT/US2023/035007 WO2024081350A1 (fr) | 2022-10-12 | 2023-10-12 | Système, procédé et produit programme d'ordinateur pour générer un modèle d'apprentissage automatique sur la base de nœuds d'anomalie d'un graphe |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| EP4602523A1 (fr) | 2025-08-20 |
Family
ID=90670089
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| EP23877991.2A Pending EP4602523A1 (fr) | 2022-10-12 | 2023-10-12 | Système, procédé et produit programme d'ordinateur pour générer un modèle d'apprentissage automatique sur la base de noeuds d'anomalie d'un graphe |
Country Status (3)
| Country | Link |
|---|---|
| EP (1) | EP4602523A1 (fr) |
| CN (1) | CN120418810A (fr) |
| WO (1) | WO2024081350A1 (fr) |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN109936525B (zh) * | 2017-12-15 | 2020-07-31 | 阿里巴巴集团控股有限公司 | 一种基于图结构模型的异常账号防控方法、装置以及设备 |
| US11606389B2 (en) * | 2019-08-29 | 2023-03-14 | Nec Corporation | Anomaly detection with graph adversarial training in computer systems |
| US11606393B2 (en) * | 2019-08-29 | 2023-03-14 | Nec Corporation | Node classification in dynamic networks using graph factorization |
| US11263644B2 (en) * | 2020-04-22 | 2022-03-01 | Actimize Ltd. | Systems and methods for detecting unauthorized or suspicious financial activity |
| EP4030351A1 (fr) * | 2021-01-18 | 2022-07-20 | Siemens Aktiengesellschaft | Dispositif industriel et procédé de construction et/ou de traitement d'un graphe de connaissances |
- 2023
- 2023-10-12 WO PCT/US2023/035007 patent/WO2024081350A1/fr not_active Ceased
- 2023-10-12 CN CN202380072678.8A patent/CN120418810A/zh active Pending
- 2023-10-12 EP EP23877991.2A patent/EP4602523A1/fr active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| WO2024081350A1 (fr) | 2024-04-18 |
| CN120418810A (zh) | 2025-08-01 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11847572B2 (en) | Method, system, and computer program product for detecting fraudulent interactions | |
| US11741475B2 (en) | System, method, and computer program product for evaluating a fraud detection system | |
| US12229779B2 (en) | Method, system, and computer program product for detecting group activities in a network | |
| US20230018081A1 (en) | Method, System, and Computer Program Product for Determining Relationships of Entities Associated with Interactions | |
| US12423713B2 (en) | Method, system, and computer program product for fraud prevention using deep learning and survival models | |
| US12333591B2 (en) | Method, system, and computer program product for providing product data and/or recommendations | |
| US11144919B2 (en) | System, method, and computer program product for guaranteeing a payment authorization response | |
| US11900230B2 (en) | Method, system, and computer program product for identifying subpopulations | |
| US20220245516A1 (en) | Method, System, and Computer Program Product for Multi-Task Learning in Deep Neural Networks | |
| US11748386B2 (en) | Method, system, and computer program product for managing source identifiers of clustered records | |
| US20250124298A1 (en) | Method and System for Adversarial Training and for Analyzing Impact of Fine-Tuning on Deep Learning Models | |
| US20240105197A1 (en) | Method and System for Enabling Speaker De-Identification in Public Audio Data by Leveraging Adversarial Perturbation | |
| US20230351431A1 (en) | System, Method, and Computer Program Product for Segmenting Users Using a Machine Learning Model Based on Transaction Data | |
| EP4602523A1 (fr) | Système, procédé et produit programme d'ordinateur pour générer un modèle d'apprentissage automatique sur la base de noeuds d'anomalie d'un graphe | |
| US20220138501A1 (en) | Method, System, and Computer Program Product for Recurrent Neural Networks for Asynchronous Sequences | |
| US20220300755A1 (en) | Method, System, and Computer Program Product for Predicting Future States Based on Time Series Data Using Feature Engineering and/or Hybrid Machine Learning Models | |
| WO2024076656A1 (fr) | Procédé, système, et produit programme d'ordinateur pour un apprentissage multitâche sur des données chronologiques | |
| WO2023014567A1 (fr) | Procédé et système pour structure permettant de surveiller un risque de règlement de crédit d'acquéreur | |
| EP4602515A1 (fr) | Procédé, système et produit programme informatique pour fournir une structure permettant d'améliorer la discrimination de caractéristiques de graphe par un réseau neuronal de graphe | |
| WO2025110999A1 (fr) | Procédé, système et produit-programme informatique pour l'utilisation d'un apprentissage par renforcement pour augmenter la précision d'étiquette de modèle d'apprentissage automatique | |
| WO2023215043A1 (fr) | Système, procédé et produit programme d'ordinateur pour l'apprentissage actif dans des réseaux neuronaux de graphique par réduction d'incertitude hybride |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20250512 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC ME MK MT NL NO PL PT RO RS SE SI SK SM TR |