
WO2025118110A1 - Automated account takeover detection and prevention - Google Patents

Automated account takeover detection and prevention

Info

Publication number
WO2025118110A1
Authority
WO
WIPO (PCT)
Prior art keywords
computing
action
metrics
values
actions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2023/136170
Other languages
French (fr)
Inventor
Yuan Cheng
Haoyue HU
Yan Zhang
Zhou FANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PayPal Inc
Original Assignee
PayPal Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PayPal Inc filed Critical PayPal Inc
Priority to PCT/CN2023/136170
Publication of WO2025118110A1

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q20/00 Payment architectures, schemes or protocols
    • G06Q20/38 Payment protocols; Details thereof
    • G06Q20/389 Keeping log of transactions for guaranteeing non-repudiation of a transaction
    • G06Q20/40 Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check credit lines or negative lists
    • G06Q20/401 Transaction verification
    • G06Q20/4016 Transaction verification involving fraud or risk level assessment in transaction processing
    • G06Q20/409 Device specific authentication in transaction processing
    • G06Q20/4093 Monitoring of device authentication

Definitions

  • This disclosure relates to detecting and responding to a heightened risk of fraudulent takeover of electronic accounts.
  • Accounts in electronic systems may be held by users and required for interaction with the electronic system. Being logged in through an account may entitle an electronic device to act on the associated user’s behalf within the electronic system, to retrieve and modify data associated with the account-holding user, and to conduct transactions involving goods, services, files, currency, etc. to which the account has access. Accordingly, when a user’s account is fraudulently accessed (referred to herein as an account takeover), anything of value associated with the user’s account may be at risk. Where a loss is sustained, the loss may be borne by the user, by another transacting party, by the proprietor of the electronic system, or by some other party.
  • FIG. 1 is a block diagram view of an example system for detecting account takeover and performing a responsive action.
  • FIG. 2 is a flow chart illustrating an example method for detecting account takeover and taking responsive action.
  • FIG. 3 is a flow chart illustrating an example method of determining account takeover risk based on activity of similar accounts.
  • FIG. 4 is a diagrammatic view of a system for performing the method of FIG. 3.
  • FIG. 5 is a flow chart illustrating an example method of determining account takeover risk based on similar past activity.
  • FIG. 6 is a diagrammatic view of a system for performing the method of FIG. 5.
  • FIG. 7 is a block diagram of an example computing system.
  • Account takeover fraud attacks are, in some electronic ecosystems, an irregular but significant threat. Because attacks are irregular, constant monitoring of user activity (including but not limited to account access activity (e.g., logins), account modification activity, and inter-party transactions) to detect fraudulent account takeover attacks early can have substantial benefit.
  • Known approaches for detecting account takeover (ATO) generally include monitoring activity of many accounts for known patterns of fraudulent activity that are employed by known fraudulent parties.
  • An account takeover may present risk for end users as well as the computing systems that host those accounts, such as merchant systems and other transaction systems.
  • fraudulent activity may include use of an end user account to make purchases with the hacked user’s funds, but with the goods directed to a fraudulent address.
  • the fraudulent transaction will generally be made with a different computing device (with a different IP address, a different location, a different device identifier, etc.) than legitimate transactions associated with the accounts.
  • One of the user, the merchant that unwittingly made the fraudulent sale, or the transaction processing system that processed the fraudulent transaction must ultimately bear the cost of the fraudulent transaction.
  • Patterns that may indicate fraudulent activity are not always associated with fraudulent activity, and thus categorically classifying such patterns as fraudulent can result in declining or rejecting activity by legitimate parties.
  • as a result, responsive action may only be taken in response to a large-scale attack (in order to avoid rejecting legitimate activity), even where the indicators of such an attack were present before the attack.
  • known approaches are generally retrospective, rather than predictive.
  • a user adding a new phone to an account may be deemed a risky fraud pattern because fraudulent parties tend to add a new phone to stolen accounts in order to pass authorization challenges.
  • legitimate users also sometimes add new phones to their accounts, making this pattern indistinguishable between legitimate users and fraudulent parties, especially when there are no fraud attacks.
  • adding a new phone will appear more often among fraudulent parties than among legitimate users and the pattern will be strong enough that responsive action may be taken.
  • the instant disclosure improves upon known methods for detecting account takeover through automated monitoring of account-based activity patterns and activity-type patterns and automated responsive actions in response to such monitoring. Further, known approaches to detecting account takeover are generally confined to individual domains: new phone addition data may be monitored for fraud separately from inter-party activity, and further separately from account information changes, and so on. The instant disclosure also improves upon known approaches by consolidating disparate types of activity and monitoring them in combination for indicators of account takeover.
  • FIG. 1 is a block diagram of an example networked system 100 for detecting account takeover and performing responsive actions.
  • the system 100 may include an ATO detection system 102, a source of prior transaction data 104, a transaction processing system 106, a plurality of user devices 108 (two such user devices 108a, 108b are shown) , and one or more merchants 128 (two such merchants 128a, 128b are shown) .
  • the user devices 108 and merchants 128 may be in electronic communication with the transaction processing system 106 and with each other over a network 110.
  • the ATO detection system 102, prior transaction data source 104 and transaction processing system 106 may also all be in electronic communication with each other via the network 110 and/or another network.
  • the ATO detection system 102 may include a processor 112 and a non-transitory, computer-readable memory 114 that contains instructions that, when executed by the processor, cause the ATO detection system 102 to perform one or more of the steps, processes, methods, operations, etc. described herein with respect to the ATO detection system 102.
  • the ATO detection system 102 may include one or more functional modules embodied in the memory.
  • the functional modules may include an account grouping module 116, a metric calculation module 118, an outlier detection module 120, a transaction clustering module 122, a cluster risk detection module 124, and an auto-action module 126.
  • the instant disclosure refers to accounts, users, merchants, and transactions and other electronic activity.
  • Such accounts may be accounts common to a particular service provider, a particular network, a particular electronic activity processor, a particular merchant, etc.
  • the accounts may be accounts with the transaction processing system 106, and the users may be legitimate users associated with those accounts.
  • the merchants may be merchants offering goods and services for sale, which sales may be processed by the transaction processing system.
  • the electronic transactions and other activity may be transactions processed by, or other activity in or through, the transaction processing system 106, and/or transactions and activity outside of the transaction processing system 106. Transactions may be between a user and a merchant, or between a user and another user.
  • the account grouping module 116 may receive, as input, characteristics of a plurality of user accounts, such as accounts associated with a particular service provider, a particular network, a particular merchant 128, a particular electronic activity processor, etc., and may define groups of accounts. Once account groupings are defined, all accounts within a given group may be treated in the same fashion for account takeover risk level and responsive actions.
  • the metric calculation module 118 may be configured to calculate a variety of metrics for each of a variety of time periods, each metric respective of each account grouping defined by the account grouping module 116.
  • metrics may include a quantity of active accounts for a time period, a quantity of transactions or other activity by the accounts within the time period, a quantity of retracted (e.g., withdrawn or cancelled) transactions, a quantity of loss within the time period, and/or a quantity or existence of certain types of actions.
  • the metric calculation module may calculate, for each time period, a value for each of these or other metrics.
  • a time period may be, for example, an hour, a half of a day, a day, three days, a week, etc.
  • the outlier detection module 120 may compare metric values calculated for a present time period or a most recent time period to the values of the same metrics for previous time periods to determine if the present or most recent time period is an outlier relative to the previous values for one or more metrics. Accordingly, the outlier detection module 120 may store, in conjunction with the metric calculation module 118, historical values for one or more metrics for one or more time periods and one or more account groups. Outliers may be indicative of heightened account takeover risk for the accounts in the relevant account group, in some embodiments.
  • the account grouping module 116, metric calculation module 118, and outlier detection module 120 may cooperatively identify accounts at risk of an account takeover. In response to a heightened account-based account takeover risk, an appropriate action may be taken automatically, as will be discussed below.
  • the transaction clustering module 122 may receive records of a plurality of transactions for a recent time period and may determine a risk of an account takeover for particular types of transactions, that is, transactions having a particular profile or a particular combination of characteristics. Characteristics of transactions that may be considered include, for example, a flow or sequence of interactions leading to the transaction, a flow through which the account logged in before the transaction, account login channel, and location (e.g., account-holder residence country, geographic origin of transaction instruction, etc. ) .
  • a login flow or channel may include, for example, login via a merchant application on the customer’s device 108, via a website of the merchant 128, via an application associated with the transaction processing system 106, etc. In some embodiments, different combinations of values of these and/or other transaction characteristics may define transaction clusters.
  • the cluster risk detection module 124 may determine, for each transaction cluster, the risk of account takeover based on transactions and outcomes within those clusters. For example, the cluster risk detection module may consider a transaction volume, a payment volume, a quantity of retracted transactions, a rate or percentage of disputed transactions, and/or other transaction volumes and outcomes within each cluster. Based on those volumes and outcomes, the cluster risk detection module may define one or more transaction clusters as at risk for account takeover, that is, indicative of an account takeover.
  • the cluster risk detection module 124 may determine a set of one or more high-risk clusters on a periodic basis (e.g., daily) and may establish and store rules for each next period that define which clusters are high-risk and what should be done in response to further transactions having the characteristics of each high-risk cluster, according to how high the risk is determined to be by the cluster risk detection module 124.
  • the transaction clustering module 122 and cluster risk detection module 124 may identify transaction profiles that are indicative of an account takeover, so that action may be taken in response to further transactions that meet a high-risk transaction profile, as discussed below.
  • the auto-action module 126 may receive identification of risky account groups from the outlier detection module 120 and identification of risky transaction profiles from the cluster risk detection module 124 and may take responsive action as appropriate. For example, the auto-action module 126 may automatically notify every account that is classified as in a high-risk account grouping, such as via a notification email, a notification in an application associated with the transaction processing system 106, a notification text, etc. The notification may prompt the account holder to, for example, change their password, be aware of any phishing or other social engineering efforts, enable a second authentication factor for logging in, or take some other preventative measure.
  • the auto-action module 126 may automatically notify one or more entities that host, onboard, or interact with accounts identified as risky, such as a merchant 128 or payment processor that hosts one or more accounts that fit a risky account group profile, or a merchant 128 or payment processor with a history of transacting with accounts that fit a risky account group profile. Additionally or alternatively, where the auto-action module 126 is separate from a transaction processing system 106 that hosts one or more accounts in a risky account grouping, the notification may be transmitted to such a transaction processing system 106. Such a notification may prompt the merchant, payment processor, or transaction processing system 106 to, for example, delay or refuse transactions with accounts matching high-risk profiles, to lock or require a password change from the affected accounts, etc.
  • the auto-action module 126 may respond to transactions that share characteristics with a transaction cluster identified as risky by the cluster risk detection module 124, such as by rejecting such transactions or requiring a second authentication factor from the initiating user to confirm the transaction instruction. Accordingly, the auto-action module 126 may receive transactions instructed through the transaction processing system 106 and may hold approval and/or denial authority over such transactions.
  • the source of prior transaction data 104 may include records respective of a plurality of prior account activities on the transaction processing system 106 and/or other computing activity environments or systems.
  • the prior transaction data may include, for each transaction, the transacting accounts, any third-party services or systems involved, the time of the transaction, the geographic locations of the transacting devices, and outcome of each transaction (e.g., whether the transaction was final, was retracted (e.g., disputed) , or other outcome) .
  • the prior transaction data may be used by the ATO detection system 102 to detect risk of account takeover on an account basis and/or on a transaction basis.
  • Both users and merchants 128 may initiate transactions, review transactions, complete transactions, etc. through the transaction processing system 106. Accordingly, the transaction processing system 106 may receive, from user computing devices 108 or merchants 128, instructions to initiate a transaction, an instruction to accept or complete a transaction, an instruction to review one or more transactions, an instruction to retract a transaction, etc., and may respond by performing or facilitating the requested user or merchant action.
  • the instruction may be received from a server associated with a merchant (e.g., a server hosting an application or website of the merchant) , with the instruction providing information respective of the transacting parties (e.g., the merchant’s information and the user’s information) and details of the transaction (e.g., the goods exchanged and the cost) , and the transaction processing system 106 subsequently processing the transaction, including collecting any needed further information (e.g., payment information from the user, a user login to the transaction processing system 106, and so on) .
  • such an instruction from a merchant may be received by the transaction processing system 106 from a subroutine or subprogram associated with a particular merchant executing in an environment operated by the transaction processing system 106 (e.g., a website or application of the transaction processing system 106) .
  • user activity as discussed herein may include transactions instructed through the transaction processing system 106, in some embodiments, and/or user activity on one or more platforms, networks, etc.
  • Such transactions may include, for example, a computing transaction such as a file creation, a revision to a file, an electronic communication, a financial transaction (or component thereof) , a real-estate transaction (or component thereof) , a service request, or any other electronic transaction.
  • user activity according to the present disclosure may be or may include an event associated with a user, such as a user navigation to a webpage, a user search request, etc.
  • the transaction processing system 106 may be associated with a particular electronic user interface and/or platform through which users and merchants perform electronic transactions (e.g., any of merchant-to-user transactions, user-to-user transactions, and merchant-to-merchant or other business-to-business transactions) .
  • the electronic user interface may be embodied in a website, mobile application, etc.
  • the transaction processing system 106 may be associated with or wholly or partially embodied in one or more servers, which server (s) may host the interface, and through which the user computing devices 108 and merchants 128 may access the user interface.
  • the user computing devices 108 may be respectively associated with different user accounts. That is, user computing device 108a may be associated with a first user account, and user computing device 108b may be associated with a second user account. Where user computing devices are discussed herein, it may be assumed that different devices are associated with different user accounts for convenience of description, though of course a single user account may be accessed from multiple devices in practical use.
  • different merchants 128 may be associated with different computing resources (e.g., different servers, different applications, different transacting locations, etc. ) .
  • merchant 128a may be based in Country A, with its servers in Country B and its payment and payment-receipt accounts located in Country C.
  • merchant 128b may be based in Country B, with its servers in Country C and its payment and payment-receipt accounts located in Country D.
  • FIG. 2 is a flow chart illustrating an example method 200 of detecting account takeover risk and performing a responsive action.
  • the method 200 may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
  • the method 200 may include, at operation 202, receiving past transaction data.
  • the past transaction data may include, for example, data from the prior transaction data source 104.
  • the method 200 may further include, at operation 204, determining, periodically, an account-based account takeover risk.
  • Operation 204 may include, for example, determining account groupings, determining account metric values for a plurality of past time periods and a current time period, and determining outlier metric values for the current time period, as discussed above with respect to the account grouping module 116, the metric calculation module 118, and the outlier detection module 120.
  • a detailed example of operation 204 is described with respect to the method 300 of FIG. 3 below.
  • Operation 204 may include determining account-based takeover risk, for example, on an hourly basis, a daily basis, a weekly basis, etc.
  • the method 200 may further include, at operation 206, determining, periodically, a transaction-based account takeover risk.
  • Operation 206 may include, for example, determining transaction clusters and determining account takeover risk with respect to each cluster, as discussed above with respect to the transaction clustering module 122 and the cluster risk detection module 124. A detailed example of operation 206 is described with respect to the method 500 of FIG. 5 below.
  • Operation 206 may include determining transaction-based takeover risk, for example, on an hourly basis, a daily basis, a weekly basis, etc.
  • the method may further include, at operation 208, responding, continuously, to an elevated risk of account takeover at either of operations 204, 206.
  • operation 208 may include notifying accounts at risk of takeover, and/or repositories of those accounts (e.g., a merchant or third party service provider, where the heightened risk is based on the association of the accounts with the merchant or third party service provider) according to operation 204 and responding to further transactions that match transaction profiles identified as high-risk at operation 206. Examples of operation 208 are described in both methods 300, 500 below.
  • Operations 204, 206 may be performed substantially in parallel, such that an electronic transaction ecosystem is monitored for account takeover risk on an account basis (operation 204) and on a transaction basis (operation 206) .
  • the past transactions that are used to establish baseline risk values may be the same, or may at least partially overlap, between operations 204 and 206.
  • FIG. 3 is a flow chart illustrating an example method 300 of determining, periodically, an account-based account takeover risk.
  • the method 300 may be an embodiment of operations 204, 208 of the method 200 of FIG. 2.
  • the method 300, or one or more portions of the method 300, may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
  • FIG. 4 is a diagrammatic view of a system 400 for performing the method 300 of FIG. 3. The method 300 will be described in conjunction with the system 400.
  • the method 300 includes, at operation 302, defining a plurality of account groupings for monitoring according to the characteristics of those accounts.
  • Account groupings, or groups, may be determined according to one or both of account profile information (e.g., bibliographic information respective of the account) and account behaviors.
  • account groupings, or groups, may be determined based on account characteristics, or combinations of characteristics, that have been indicative of fraudulent behavior or takeover risk based on past transaction data, including geographic origin, age of account, transaction volume, time of day for transactions, and other account characteristics.
  • Account groupings may be or may include groups of end user accounts with common characteristics (e.g., where a first grouping has a first set of common characteristics among the accounts in the group, a second grouping has a different second set of common characteristics, and so on) .
  • a combined profile and behavior characteristic that may be considered is a mismatch between the country of origin of the account profile and the country from which an account was logged in, with such a mismatch being potentially indicative of fraudulent behavior, especially if a given pattern (particular login country different from countries of many profiles) has a high volume in a given period of time.
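  • As a minimal illustration of such grouping (a hypothetical Python sketch; the field names and the grouping key are assumptions, not the disclosure’s implementation), accounts could be bucketed by the mismatching login country, so that a high volume of the same mismatch pattern surfaces as a single monitored group:

        from collections import defaultdict

        # Toy account records; "profile_country" and "login_country" are assumed fields.
        accounts = [
            {"id": "a1", "profile_country": "US", "login_country": "US"},
            {"id": "a2", "profile_country": "US", "login_country": "RO"},
            {"id": "a3", "profile_country": "DE", "login_country": "RO"},
        ]

        groups = defaultdict(list)
        for acct in accounts:
            if acct["login_country"] != acct["profile_country"]:
                # One group per distinct mismatching login country, so a spike of
                # logins from one foreign country appears as a single group.
                groups[("country_mismatch", acct["login_country"])].append(acct["id"])

        print(dict(groups))  # {('country_mismatch', 'RO'): ['a2', 'a3']}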
  • Operation 302 may include assigning each of a plurality of accounts to a single respective grouping, in some embodiments. That is, each account may belong to only a single account group.
  • account groups may be defined such that a single account may belong to multiple groups. Further, in some embodiments, groups may be defined such that only a subset of all possible accounts (e.g., only a subset of the accounts of the transaction processing system 106) are in a defined group. In other embodiments, groups may be defined such that all possible accounts are collectively included in the defined groups.
  • each account group may include a target set of accounts for monitoring.
  • the method 300 may further include, at operation 304, for each account group, receiving transaction data for a plurality of sub-periods of a recent past time period.
  • the transaction data is shown as the transaction data source 104 in FIG. 4.
  • the transaction data may include data respective of each of the account groups.
  • the transaction data received at operation 304 may be respective of multiple time sub-periods within a larger time period.
  • the transaction data may be respective of each day (sub-periods) within a plurality of days (period) (e.g., 15 days, 30 days, 60 days, 90 days, 120 days, etc. ) , with each day being a respective discrete time sub-period.
  • the transaction data may be respective of a plurality of weeks (e.g., 4 weeks, 8 weeks, 24 weeks, 52 weeks, etc. ) , with each week being a discrete time sub-period.
  • the method 300 may further include, at operation 306, calculating values of a plurality of risk metrics for each sub-period and for each account group.
  • An example set of account metrics is shown in block 402 in FIG. 4, which shows Account Metric A, Account Metric B, ..., Account Metric P.
  • Such metrics may include, for example, a number of users, a total transaction volume, a quantity of retracted transactions (e.g., cancelled or disputed) , a total loss within the group (e.g., gross loss or net loss) , a quantity of loss that is addressed by existing actions or solutions, and/or other metrics.
  • Each calculated metric value, for each sub-period and each account group, may then be stored in a repository of metric values, such as the historical metric values database 404 shown in FIG. 4.
  • the historical metrics database 404 may be included in the ATO detection system 102 and/or the prior transaction data source 104, in embodiments.
  • Operation 306 may be performed on a periodic basis, e.g., for the most recent time sub-period at the end of that sub-period.
  • the historical metrics database 404 may include a set of metric values for each account group for a large number of past time periods (e.g., more time periods than are considered for outlier detection, as discussed below) .
  • operation 306 may additionally include discarding metric values that are older than a threshold in order to conserve data storage space and improve database or other memory efficiency, whereby the historical metrics database 404 stores only the time duration of metrics calculations that are intended to be used for outlier detection, as discussed below.
  • operation 306 may include calculating values for the same metrics for all account groups. In other embodiments, operation 306 may include calculating values for different metrics for different account groups. Because different groups may have different characteristics, or may be defined based on different behaviors or profile factors, different metrics may be more appropriate for one group than for another in assessing fraud risk.
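  • A hypothetical sketch of the per-group, per-sub-period metric calculation of operation 306 (Python; the transaction fields and the three metrics shown are illustrative assumptions):

        from collections import defaultdict

        transactions = [
            {"group": "g1", "day": "2023-11-01", "amount": 50.0, "retracted": False},
            {"group": "g1", "day": "2023-11-01", "amount": 20.0, "retracted": True},
            {"group": "g1", "day": "2023-11-02", "amount": 75.0, "retracted": False},
        ]

        # One record of metric values per (account group, sub-period) combination.
        metrics = defaultdict(lambda: {"volume": 0.0, "count": 0, "retracted": 0})
        for tx in transactions:
            m = metrics[(tx["group"], tx["day"])]
            m["volume"] += tx["amount"]        # total transaction volume
            m["count"] += 1                    # quantity of transactions
            m["retracted"] += tx["retracted"]  # quantity of retracted transactions

        print(metrics[("g1", "2023-11-01")])
        # {'volume': 70.0, 'count': 2, 'retracted': 1}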
  • the method 300 may further include, at operation 308, discarding the top-N highest values for each metric and each account group to generate comparison values for each combination of metric and account group.
  • a set of comparison values may exist for each metric and each account group, and the set of comparison values may be used to determine whether a particular time period is an outlier with respect to one or more metrics, which may indicate a heightened risk of account takeover. Discarding a certain number of values at operation 308 may ensure that prior outliers are not used in setting baseline metric values.
  • the number N of values that are discarded may be selected based on a specific quantity (e.g., five values) , based on a percentage of values (e.g., ten percent of values, twenty percent of values, fifty percent of values, etc. ) , based on a deviance (e.g., values that are two or more standard deviations from the mean value) , or based on some other selection.
  • Discarding values may include, in some embodiments, ignoring the discarded values for outlier determination, though the discarded values may remain stored in the historical metrics database 404.
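  • A hypothetical sketch of the top-N discard of operation 308 (Python; the ten-percent discard fraction is one of the example selections above):

        def comparison_values(history, discard_fraction=0.10):
            """Drop the highest values so prior outliers do not inflate the baseline."""
            n = max(1, int(len(history) * discard_fraction))
            return sorted(history)[:-n]

        daily_loss = [12, 9, 14, 11, 10, 95, 13, 8, 12, 10]  # toy metric history
        print(comparison_values(daily_loss))  # the outlier value 95 is excluded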
  • the method 300 may further include, at operation 310, for each account group, receiving present transaction data.
  • the present transaction data may be transaction data for all accounts in an account group for a most recent time sub-period (e.g., the most recent day, the most recent week, etc. ) .
  • the present transaction data may include, for example, details of parties (e.g., merchants) with which each account has transacted, the goods or services exchanged in those transactions, the payment types used in those transactions, geographic locations associated with those transactions (e.g., of the computing devices of the transacting parties, of the “home” location of the transacting user (s) and/or merchants) , a quantity of transactions engaged in by each account, and/or any other data that may be associated with a transaction entered into by a party consistent with this disclosure.
  • the method 300 may further include, at operation 312, calculating values of each risk metric based on the present transaction data for each account group.
  • Operation 312 may include calculating metric values for the same groups and the same metrics as were determined at operation 306, but for the most recent time sub-period.
  • Operations 310 and 312 may be performed periodically, e.g., at the end of every sub-period. For example, operations 310 and 312 may be performed at the end of every hour, every day, every week, etc.
  • operations 304, 306 may be collectively performed over time through repeated performance of operations 310, 312.
  • the method 300 may further include, at operation 314, determining, for each account group, if the present transaction data metric values deviate from the comparison metric values (e.g., if the present metric values are outliers) , as shown at block 406 in FIG. 4.
  • as used herein, “adaptive” standard deviation (A-STD) and “adaptive” average (A-AVG) refer to calculations performed after the top-N values for the metric are discarded at operation 308.
  • Operation 314 may further include, for each account group and each metric, normalizing the deviation amount by the adaptive standard deviation of historical metrics in a 90-day time window (A-STD (d-1, d-90)). If the EG ratio (EGSTD), calculated according to equation (2) below, is more than a predetermined threshold (e.g., 500), the most recent time period metric may be considered an outlier:
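  • The original equations (1) and (2) do not survive in this text. A plausible reconstruction from the surrounding description, where M_d is the metric value for the most recent sub-period d and A-AVG and A-STD are the adaptive average and adaptive standard deviation over the prior 90 sub-periods, is:

        \Delta_d = M_d - \text{A-AVG}(d-1,\, d-90)                                  (1)

        \mathrm{EG}_{\mathrm{STD}} = \frac{\Delta_d}{\text{A-STD}(d-1,\, d-90)}     (2)

  • Note that the example threshold of 500 suggests the ratio may be expressed on a scaled basis (e.g., multiplied by 100); the exact scaling is not recoverable from this text.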
  • the method 300 may further include, at operation 316 and block 408, notifying users associated with the accounts in account groups whose metric values deviate from the comparison metric values, or notifying the hosts of such accounts.
  • a notification may be or may include, for example, a notification email, a notification in an application associated with the transaction processing system 106, a notification text, etc.
  • the notification may prompt a user, for example, to change their password or review their transaction history for fraudulent transactions. If sent to an account host, the account host may lock high-risk accounts, prompt the users of high-risk accounts to change their passwords, etc.
  • the system 400 may provide output in addition to notifications, in some embodiments.
  • the system 400 may include a visualization user interface 410 in which metric values and other calculations of the method 300 may be displayed for a user.
  • the visualization interface may be hosted by the ATO detection system 102 and may be accessible to the proprietor of the transaction processing system 106 and its employee and contractor users that maintain the transaction processing system 106, rather than to the end users of the transaction processing system 106.
  • the visualization interface 410 may be used by the proprietor of the transaction processing system 106 to monitor ATO trends and more granular data to, for example, manually intervene as to one or more account groups, to alter, add, or remove account group definitions, etc.
  • the method 300 and system 400 may be employed to provide periodic monitoring of accounts for account takeover risk. For example, accounts may be grouped, and then each group may be assessed independently on a day-by-day, week-by-week, or other basis to determine risk levels. Where a particular account group is determined to be at a heightened risk (because a metric value respective of that group is an outlier for the most recent day, week, etc. ) , corrective action may be taken automatically with respect to each account in that particular group. As noted above, the corrective action may be a notification. Accounts that are not grouped, or that are not in the group determined to be at heightened risk, may not be subject to the corrective action.
  • the method 300 and system 400 may provide improved account-based fraud detection that may be broadly applicable to accounts across a variety of account hosts, rather than only to the entity from which relevant transactions originated. Furthermore, by outputting a notification when a heightened risk of fraud is detected, the method 300 and system 400 may enable more tailored responses to heightened fraud risk depending on the host type, as an appropriate responsive action may be different where a transaction processing system is an account host than where a merchant is an account host. Accordingly, the method 300 and system 400 may improve the technical field of fraud detection by applying the account-grouping and outlier-detection techniques described above across a variety of account hosts.
  • FIG. 5 is a flow chart illustrating an example method 500 of determining, periodically, transaction-based account takeover risk.
  • the method 500 may be an embodiment of operations 206, 208 of the method 200 of FIG. 2.
  • the method 500, or one or more portions of the method 500, may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
  • FIG. 6 is a diagrammatic view of a system 600 for performing the method 500 of FIG. 5. The method 500 will be described in conjunction with the system 600.
  • the method 500 may include two general portions: an offline, periodic portion (indicated by dashed box 550 in FIG. 5 and dashed box 650 in FIG. 6) and an online, continuous portion (indicated by dashed box 560 in FIG. 5 and dashed box 660 in FIG. 6) .
  • the offline, periodic portion 550, 650 may be performed, for example, at the end of a relevant monitoring period in order to define characteristics of transactions that are classified as high-risk.
  • the online, continuous portion 560, 660 may then apply the most recently-defined risky transaction characteristics to detect risky transactions on a continuous basis as those transactions are instructed, and to respond appropriately.
  • the method 500 may include, at operation 502, receiving past transaction data (shown as prior transaction data 104 in FIG. 6) .
  • the past transaction data may include, for example, data respective of all transactions performed within a certain service, domain, processor, etc. (e.g., through the transaction processing system 106) within a certain period of time. For example, all transactions for a most recent day, two days, three days, one week, two weeks, etc. may be received.
  • the method 500 may further include, at operation 504, clustering past transactions according to transaction characteristics.
  • Characteristics of transactions that may be considered include, for example, a flow or sequence of interactions leading to the transaction (e.g., a user activity flow immediately before the transaction) , a flow through which the account logged in before the transaction (e.g., a user login flow of a user that initiated the computing action) , account login channel (e.g., an access channel of a computing system that initiated the transaction) , and location (e.g., account-holder residence country, geographic origin of transaction instruction, etc. ) .
  • different combinations of values of these and/or other transaction characteristics may define transaction clusters.
  • a plurality of clusters may be defined at operation 504.
  • FIG. 6 illustrates an example plurality of clusters 602: cluster 1, cluster 2, ..., cluster N.
  • the method 500 may further include, at operation 506, calculating a risk of an account takeover for each transaction cluster based on the past transactions.
  • the risk may be calculated, in some embodiments, by calculating one or more metrics for each transaction cluster, using those metric values as input to an objective function, and comparing the value of that objective function to a threshold, as discussed in detail below.
  • Operation 506 may include calculating metric values for each transaction cluster, where the metrics may include, for example, a quantity of transactions, a transaction volume (e.g., in terms of goods exchanged, files exchanged, currency exchanged, etc. ) , a quantity of retracted transactions (QT) , a rate (e.g., percentage) of retracted transactions (RT) , and/or one or more other metrics.
  • Such transaction-focused metrics are shown at block 604 in FIG. 6 as Tx metric A, Tx metric B, ... Tx metric M.
  • the values of one or more of the metrics may be input into an objective function, such that the objective function will have a value respective of each transaction cluster.
  • the objective function of equation (3) incorporates the retracted transaction quantity and retracted transaction rate
  • different metrics may be included in the objective function if data (e.g., data respective of past account takeovers and associated transactions) indicates that those other metrics correlate with fraudulent activity.
  • with the objective function of equation (3), false positives can be avoided that would otherwise result from a high retracted transaction rate at low volume, or from a large retracted transaction quantity where that large quantity came in the context of a massive sample size.
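  • Equation (3) itself is not reproduced in this text. One plausible form consistent with the description (incorporating both the retracted transaction quantity QT and the retracted transaction rate RT) is a simple product:

        f(c) = QT_c \times RT_c    (3)

  • Under this form, a cluster with a high retraction rate but low volume has a small QT_c, and a cluster with a large retracted quantity drawn from a massive sample has a small RT_c, so neither false-positive pattern alone drives f(c) over the threshold.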
  • the value of the objective function, as to each cluster, may be compared to an objective function value threshold, and any transaction cluster having an objective function value above that threshold may be classified as high-risk.
  • one or more filters may be applied to clusters before applying the objective function, or before classifying clusters as high-risk. For example, only clusters that have at least a threshold quantity of retracted transactions, and/or at least a threshold rate of retracted transactions, may be considered high risk. Accordingly, in some embodiments, operation 506 may include comparing the rate of retracted transactions of each cluster to a retracted transaction rate threshold, comparing the quantity of retracted transactions to a retracted transaction quantity threshold, and/or comparing one or more other metrics to appropriate thresholds, and including only clusters that exceed such thresholds in consideration for high-risk clusters. Referring to FIG. 6, operation 506 may result in a set 606 of high-risk cluster definitions.
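  • A hypothetical sketch of operation 506 (Python), using the product form of equation (3) reconstructed above; the cluster keys, metric values, and all thresholds are illustrative assumptions:

        clusters = {
            ("app_login", "web_checkout", "US"): {"qt": 40, "rt": 0.20},
            ("web_login", "web_checkout", "US"): {"qt": 3, "rt": 0.60},     # high rate, low volume
            ("app_login", "app_checkout", "DE"): {"qt": 120, "rt": 0.002},  # big sample, tiny rate
        }

        MIN_QT, MIN_RT, OBJECTIVE_THRESHOLD = 10, 0.05, 5.0

        high_risk = [
            key for key, m in clusters.items()
            if m["qt"] >= MIN_QT                         # filter: enough retracted transactions
            and m["rt"] >= MIN_RT                        # filter: meaningful retraction rate
            and m["qt"] * m["rt"] > OBJECTIVE_THRESHOLD  # objective function, equation (3)
        ]
        print(high_risk)  # only the first cluster is classified as high-risk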
  • the method 500 may further include, at operation 508, assigning automated actions to transaction clusters (e.g., to future transactions matching the cluster’s characteristics) having a risk above a threshold, i.e., the clusters designated as high-risk at operation 506.
  • Such automated actions may include, for example, declining a transaction or requiring a second authentication factor before processing a transaction.
  • Operation 508, in conjunction with operation 506, may further include uploading the high-risk cluster definitions (e.g., the characteristics of each high-risk cluster) and the associated automated action to a database (shown as RADD ( “Risk Analytics Dynamic Dataset” ) 608 in FIG. 6) .
  • the database 608 may be updated on a periodic basis for going-forward application.
  • operations 502, 504, 506, 508 may be repeated periodically, with each repetition using a different time period, so that the database 608 is updated for application to the next period’s transactions.
  • the database 608 may be updated on a daily basis, a weekly basis, a monthly basis, etc.
  • operations 502, 504, 506, 508 may be performed in a batch process at the end of the time period, or during a sub-period within the time period when the computing resources of the relevant system are less strained.
  • the operations 502, 504, 506, 508 may be performed at a time of day in which the transaction processing system 106 regularly experiences lower transaction volume.
  • the cluster definitions and automatic actions stored in the RADD 608 may be considered a set of auto-action rules for responding to transactions.
  • the method 500 and system 600 provide an approach for effectively combating fraud in a computationally-efficient way that improves the functioning of anti-fraud computing systems. For example, by processing large quantities of transactions in an offline manner to generate simple rules, the real-time processing load for fraud detection is relatively low and can be executed with relatively little processing demand on a per-transaction basis, enabling faster execution.
  • the method 500 may further include, at operation 510, receiving a transaction request for a transaction matching the characteristics of a high-risk transaction cluster.
  • the transaction request may be one that is received and decided substantially in real time, where the decision includes whether to approve the transaction and other details of how to process it.
  • the transaction request may be received by, or from, the transaction processing system 106.
  • Operation 510 may include comparing characteristics of the received transaction request to characteristics of transaction clusters stored in the database 608 and concluding that the received transaction matches one of the high-risk transaction clusters based on matching characteristics of the transaction and the cluster.
  • the method 500 may further include, at operation 512, applying the assigned automated action in response to the transaction request that matches the characteristics of the high-risk transaction cluster, as shown at block 610 of FIG. 6.
  • Operation 512 may include retrieving the assigned automated action from the database 608 and applying the assigned automated action in response to the transaction request.
  • Operations 510, 512 may be performed on a continuous basis. Accordingly, in some embodiments, the method 500 may include receiving all new transaction requests as those requests are made and comparing the transaction requests to stored high-risk transaction cluster definitions (operation 510) and, for each transaction that matches a high-risk cluster, applying the stored responsive action associated with that cluster automatically (block 512) .
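  • A hypothetical sketch of the continuous portion (operations 510, 512) in Python; the RADD contents, the characteristics forming the cluster key, and the action names are illustrative assumptions:

        # Rules loaded from the periodically updated database 608 (RADD).
        radd = {
            ("app_login", "web_checkout", "US"): "require_second_factor",
            ("web_login", "guest_checkout", "BR"): "decline",
        }

        def handle(transaction):
            key = (transaction["login_flow"], transaction["activity_flow"],
                   transaction["origin_country"])
            action = radd.get(key)  # constant-time, per-transaction lookup
            return action if action is not None else "approve"

        print(handle({"login_flow": "app_login", "activity_flow": "web_checkout",
                      "origin_country": "US"}))  # require_second_factor

  • Because the expensive clustering and scoring happen offline, the per-transaction check reduces to a simple lookup, consistent with the low real-time processing load noted above.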
  • the automated rules and their responsive actions may be made available to and applied to transactions occurring through or with a variety of sources.
  • the automated rules and responsive actions may be used in connection with a transaction processing system 106.
  • the automated rules and responsive actions may be made available to payment processors 612 and/or merchants 128, for those payment processors 612 and/or merchants 128 to check transactions in which they may engage against the risky transaction profiles and take appropriate responsive action.
  • FIG. 7 is a block diagram of an example computing system 700, such as a desktop computer, laptop, smartphone, tablet, or any other such device having the ability to execute instructions, such as those stored within a non-transient, computer-readable medium.
  • the various tasks described hereinafter may be practiced in a distributed environment having multiple computing systems 700 linked via a local or wide-area network
  • computing system environment 700 typically includes at least one processing unit 702 and at least one memory 704, which may be linked via a bus 706.
  • memory 704 may be volatile (such as RAM 710) , non-volatile (such as ROM 708, flash memory, etc. ) or some combination of the two.
  • Computing system environment 700 may have additional features and/or functionality.
  • computing system environment 700 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks, tape drives and/or flash drives.
  • Such additional memory devices may be made accessible to the computing system environment 700 by means of, for example, a hard disk drive interface 712, a magnetic disk drive interface 714, and/or an optical disk drive interface 716.
  • these devices, which would be linked to the system bus 706, respectively allow for reading from and writing to a hard disk 718, reading from or writing to a removable magnetic disk 720, and/or reading from or writing to a removable optical disk 722, such as a CD/DVD ROM or other optical media.
  • the drive interfaces and their associated computer-readable media allow for the nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing system environment 700.
  • Computer readable media that can store data may be used for this same purpose.
  • Examples of such media devices include, but are not limited to, magnetic cassettes, flash memory cards, digital videodisks, Bernoulli cartridges, random access memories, nano-drives, memory sticks, other read/write and/or read-only memories and/or any other method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Any such computer storage media may be part of computing system environment 700.
  • a number of program modules may be stored in one or more of the memory/media devices.
  • a basic input/output system (BIOS) 724 containing the basic routines that help to transfer information between elements within the computing system environment 700, such as during start-up, may be stored in ROM 708.
  • RAM 710, hard drive 718, and/or peripheral memory devices may be used to store computer executable instructions comprising an operating system 726, one or more applications programs 728, other program modules 730, and/or program data 732.
  • computer-executable instructions may be downloaded to the computing environment 700 as needed, for example, via a network connection.
  • the applications programs 728 may include, for example, a browser, including a particular browser application and version, which browser application and version may be relevant to determinations of correspondence between communications and user URL requests, as described herein.
  • the operating system 726 and its version may be relevant to determinations of correspondence between communications and user URL requests, as described herein.
  • An end-user may enter commands and information into the computing system environment 700 through input devices such as a keyboard 734 and/or a pointing device 736. While not illustrated, other input devices may include a microphone, a joystick, a game pad, a scanner, etc. These and other input devices would typically be connected to the processing unit 702 by means of a peripheral interface 738 which, in turn, would be coupled to bus 706. Input devices may be directly or indirectly connected to processor 702 via interfaces such as, for example, a parallel port, game port, firewire, or a universal serial bus (USB) . To view information from the computing system environment 700, a monitor 740 or other type of display device may also be connected to bus 706 via an interface, such as via video adapter 733. In addition to the monitor 740, the computing system environment 700 may also include other peripheral output devices, not shown, such as speakers and printers.
  • the computing system environment 700 may also utilize logical connections to one or more remote computing system environments. Communications between the computing system environment 700 and the remote computing system environment may be exchanged via a further processing device, such as a network router 748, that is responsible for network routing. Communications with the network router 748 may be performed via a network interface component 744.
  • in a networked environment (e.g., the Internet, World Wide Web, LAN, or other like type of wired or wireless network), program modules depicted relative to the computing system environment 700, or portions thereof, may be stored in the memory storage device(s) of the remote computing system environment.
  • the computing system environment 700 may also include localization hardware 746 for determining a location of the computing system environment 700.
  • the localization hardware 746 may include, for example only, a GPS antenna, an RFID chip or reader, a WiFi antenna, or other computing hardware that may be used to capture or transmit signals that may be used to determine the location of the computing system environment 700.
  • Data from the localization hardware 746 may be included in a callback request or other user computing device metadata in the methods of this disclosure.
  • the computing system may embody a user computing device 108, in some embodiments. Additionally or alternatively, some components of the computing system 700 may embody the ATO detection system 102 and/or transaction processing system 106. For example, the functional modules 116, 118, 120, 122, 124, 126 may be embodied as program modules 730.
  • a computer-implemented method includes (i) receiving, by a computing system, data respective of a plurality of computing actions for a time period, (ii) categorizing, by the computing system, each of the plurality of computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values of a plurality of characteristics of the computing actions, (iii) calculating, by the computing system, for each of the plurality of clusters, a respective risk for the time period, (iv) determining, by the computing system, for one or more clusters of the plurality of clusters, that the respective risk exceeds a threshold, and (v) in response to (iv), by the computing system, automatically performing a fraud prevention action for further computing actions having the respective combination of respective values of characteristics associated with the one or more clusters, wherein (i), (ii), (iii), and (iv) are performed periodically in a batch process, and (v) is performed on a continuous basis.
  • the plurality of characteristics comprises two or more of a geographic origin of the computing action, an access channel of a computing system that initiated the computing action, a user login flow of a user that initiated the computing action, or a user activity flow immediately before the computing action.
  • the fraud prevention action comprises one or more of requiring a second authentication factor of a user in the further computing action, or declining the further computing action.
  • calculating the respective risk for the time period comprises determining a respective rate of retracted computing actions for the time period.
  • (v) comprises, for a first one of the further computing actions, requiring a second authentication factor of a user in the first further computing action, and for a second one of the further computing actions, declining the second further computing action.
  • a computer-implemented method of detecting an account takeover associated with computing actions includes (i) receiving, by a computing system, data respective of a first plurality of computing actions for a first time period, the first time period comprising a plurality of sub-periods, (ii) receiving, by the computing system, data respective of a second plurality of computing actions for a second time period, the second time period different from the first time period, (iii) calculating, by the computing system, a plurality of metrics for each of the sub-periods to generate, for each of the metrics, a respective plurality of first metric values, (iv) discarding, by the computing system, for each of the metrics, a set of highest metric values to generate, for each of the metrics, a respective plurality of comparison values, (v) calculating, by the computing system, the plurality of metrics for the second time period to generate, for each of the metrics, a respective second metric value, (vi
  • (i) – (vi) are performed with respect to a target set of computing action accounts, and (vii) comprises transmitting an outlier notification to a respective user associated with each computing action account in the set of computing action accounts.
  • the method further includes defining a plurality of target sets of computing action accounts, wherein (i) – (vi) are performed with respect to each target set of computing action accounts, and wherein the plurality of metrics comprises a first plurality of metrics for a first set of the plurality of target sets of computing action accounts and a second plurality of metrics for a second set of the plurality of target sets of computing action accounts, wherein the first plurality of metrics is different from the second plurality of metrics.
  • determining that at least one of the second metric values is an outlier with respect to the comparison values includes determining, for each of the plurality of metrics, a respective average of the respective comparison values for the metric, calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric, and determining that at least one of the deviations exceeds a predetermined threshold.
  • the method further includes determining, for each of the plurality of metrics, a respective standard deviation of the respective comparison values for the metric, normalizing, for each of the plurality of metrics, the deviation of the second metric value by the respective standard deviation of the metric to calculate a normalized deviation of the second metric value, and determining that at least one of the normalized deviations exceeds a predetermined threshold.
  • one or more of the plurality of sub-periods are of equal duration to each other, or the second time period is of equal duration to at least one of the sub-periods.
  • the plurality of metrics comprise one or more of a number of users, a total computing action volume, a disputed computing action volume, or a total loss.
  • a computer-implemented method of preventing fraudulent computing action activity includes (i) assigning, by a computing system, an account of a user to a target set of computing action accounts, (ii) determining, by the computing system, first respective values for a plurality of metrics for first past computing actions of the target set of computing action accounts, (iii) determining, by the computing system, second respective values for the plurality of metrics for present computing actions of the target set of computing action accounts, (iv) determining, by the computing system, for at least one of the metrics, that the second respective value is an outlier with respect to the first respective values and, in response, transmitting an outlier notification to the user, (v) receiving, by the computing system, data respective of second past computing actions, the second past computing actions being for a time period, (vi) categorizing, by the computing system, each of the second past computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values
  • the method further includes receiving, by the computing system, data respective of the first past computing actions, the first past computing actions for a first time period, the first time period comprising a plurality of sub-periods, and receiving, by the computing system, data respective of the present computing actions, the present computing actions for a second time period, the second time period different from the first time period, wherein determining the first respective values for the plurality of metrics includes calculating values for the plurality of metrics for each of the sub-periods to generate an initial value set, and discarding, by the computing system, for each of the metrics, a set of highest metric values in the initial set to generate, for each of the metrics, the first respective values for the plurality of metrics.
  • the plurality of metrics include one or more of a number of users, a total computing action volume, a disputed computing action volume, or a total loss.
  • determining that at least one of the second metric values is an outlier with respect to the first metric values includes determining, for each of the plurality of metrics, a respective average of the first respective values for the metric, calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric, and determining that at least one of the deviations exceeds a predetermined threshold.
  • the fraud prevention action includes requiring a second authentication factor of the user, or declining the computing action associated with the computing action request.
  • the plurality of characteristics comprises two or more of a geographic origin of the computing action, an access channel of a computing system that initiated the computing action, a user login flow of a user that initiated the computing action, or a user activity flow immediately before the computing action.
  • calculating the risk comprises determining a rate of retracted computing actions.
  • the data is represented as physical (electronic) quantities within the computer system’s registers and memories and is transformed into other data similarly represented as physical quantities within the computer system memories or registers, or other such information storage, transmission, or display devices as described herein or otherwise understood to one of ordinary skill in the art.

Landscapes

  • Business, Economics & Management (AREA)
  • Accounting & Taxation (AREA)
  • Engineering & Computer Science (AREA)
  • Finance (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Financial Or Insurance-Related Operations Such As Payment And Settlement (AREA)

Abstract

A method of preventing account takeover fraud includes assigning a user account to a target set of accounts, determining metric values for past and present computing actions of the target set of accounts, determining, for at least one of the metrics, that a present metric value is an outlier with respect to the past metric values and, in response, transmitting an outlier notification to the user. The method further includes receiving a computing action request involving the account, the request including computing action characteristic values, determining that the characteristic values match a combination of characteristic values associated with a risk that exceeds a threshold, the risk determined according to past computing actions and, in response, requiring a second authentication factor of the user or declining the computing action associated with the request.

Description

AUTOMATED ACCOUNT TAKEOVER DETECTION AND PREVENTION
TECHNICAL FIELD
This disclosure relates to detecting and responding to a heightened risk of fraudulent takeover of electronic accounts.
BACKGROUND
Accounts in electronic systems, such as electronic transaction systems, may be held by users and required for interaction with the electronic system. Being logged in through an account may entitle an electronic device to act on the associated user’s behalf within the electronic system, to retrieve and modify data associated with the account-holding user, and to conduct transactions involving goods, services, files, currency, etc. to which the account has access. Accordingly, when a user’s account is fraudulently accessed (referred to herein as an account takeover), anything of value associated with the user’s account may be at risk. Where a loss is sustained, the loss may be borne by the user, by another transacting party, by the proprietor of the electronic system, or by some other party.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram view of an example system for detecting account takeover and performing a responsive action.
FIG. 2 is a flow chart illustrating an example method for detecting account takeover and taking responsive action.
FIG. 3 is a flow chart illustrating an example method of determining account takeover risk based on activity of similar accounts.
FIG. 4 is a diagrammatic view of a system for performing the method of FIG. 3.
FIG. 5 is a flow chart illustrating an example method of determining account takeover risk based on similar past activity.
FIG. 6 is a diagrammatic view of a system for performing the method of FIG. 5.
FIG. 7 is a block diagram of an example computing system.
DETAILED DESCRIPTION
Account takeover fraud attacks are, in some electronic ecosystems, an irregular but significant threat. Because attacks are irregular, constantly monitoring user activity, including but not limited to account access activity (e.g., logins), account modification activity, and inter-party transactions, can have substantial benefit in detecting fraudulent account takeover attacks early.
Known approaches for detecting ATO (Account TakeOver) generally include monitoring activity of many accounts for known patterns of fraudulent activity that are employed by known fraudulent parties. An account takeover may present risk for end users as well as the computing systems that host those accounts, such as merchant systems and other transaction systems. For example, if a large-scale account takeover occurs through a merchant system (e.g., through a hack of the merchant system), fraudulent activity may include use of an end user account to make purchases with the hacked user’s funds, but with the goods directed to a fraudulent address. The fraudulent transaction will generally be made with a different computing device (with a different IP address, a different location, a different device identifier, etc.) than legitimate transactions associated with the accounts. One of the user, the merchant that unwittingly made the fraudulent sale, or the transaction processing system that processed the fraudulent transaction must ultimately bear the cost of the fraudulent transaction.
Patterns that may indicate fraudulent activity, however, are not always associated with fraudulent activity, and thus categorically classifying such patterns as fraudulent can result in declining or rejecting activity by legitimate parties. As a result, in some circumstances, responsive action is taken only in response to a large-scale attack, in order to avoid rejecting legitimate activity, even though the indicators of such an attack were present before the attack. Furthermore, known approaches are generally retrospective, rather than predictive.
In one example, a user adding a new phone to an account may be deemed a risky fraud pattern because fraudulent parties tend to add a new phone to stolen accounts in order to pass authorization challenges. However, legitimate users also sometimes add new phones to their accounts, making this pattern indistinguishable between legitimate users and fraudulent parties, especially when there are no fraud attacks. But when there are fraud attacks, adding a new phone will appear more often among fraudulent parties than among legitimate users and the pattern will be strong enough that responsive action may be taken.
The instant disclosure improves upon known methods for detecting account takeover through automated monitoring of account-based activity patterns and activity-type patterns and automated responsive actions in response to such monitoring. Further, known approaches to detecting account takeover are generally confined to individual domains: data on new phone additions may be monitored for fraud separately from inter-party activity, and further separately from account information changes, and so on. The instant disclosure also improves upon known approaches by consolidating disparate types of activity and monitoring them in combination for indicators of account takeover.
Referring now to the drawings, wherein like reference numerals refer to the same or similar features in the various views, FIG. 1 is a block diagram of an example networked system 100 for detecting account takeover and performing responsive actions. The system 100 may include an ATO detection system 102, a source of prior transaction data 104, a transaction processing system 106, a plurality of user devices 108 (two such user devices  108a, 108b are shown) , and one or more merchants 128 (two such merchants 128a, 128b are shown) . The user devices 108 and merchants 128 may be in electronic communication with the transaction processing system 106 and with each other over a network 110. The ATO detection system 102, prior transaction data source 104 and transaction processing system 106 may also all be in electronic communication with each other via the network 110 and/or another network.
The ATO detection system 102 may include a processor 112 and a non-transitory, computer-readable memory 114 that contains instructions that, when executed by the processor, cause the ATO detection system 102 to perform one or more of the steps, processes, methods, operations, etc. described herein with respect to the ATO detection system 102. The ATO detection system 102 may include one or more functional modules embodied in the memory. The functional modules may include an account grouping module 116, a metric calculation module 118, an outlier detection module 120, a transaction clustering module 122, a cluster risk detection module 124, and an auto-action module 126.
The instant disclosure refers to accounts, users, merchants, and transactions and other electronic activity. Such accounts may be accounts common to a particular service provider, a particular network, a particular electronic activity processor, a particular merchant, etc. For example, the accounts may be accounts with the transaction processing system 106, and the users may be legitimate users associated with those accounts. The merchants may be merchants offering goods and services for sale, which sales may be processed by the transaction processing system. The electronic transactions and other activity may be transactions processed by, or other activity in or through, the transaction processing system 106, and/or transactions and activity outside of the transaction processing system 106. Transactions may be between a user and a merchant, or between a user and another user. Although this disclosure refers to transactions as context for the novel methods and systems,  it should be understood that such methods and systems may be applied to or in the context of a wide variety of computing actions, some of which may not be considered transactions. For example, where past transactions are considered herein, past computing actions may more broadly be considered. Similarly, where present transactions are responded to herein, present computing actions may more broadly be responded to.
The account grouping module 116 may receive, as input, characteristics of a plurality of user accounts, such as accounts associated with a particular service provider, a particular network, a particular merchant 128, a particular electronic activity processor, etc., and may define groups of accounts. Once account groupings are defined, all accounts within a given group may be treated in the same fashion for account takeover risk level and responsive actions.
The metric calculation module 118 may be configured to calculate a variety of metrics for each of a variety of time periods, each metric respective of each account grouping defined by the account grouping module 116. For example, such metrics may include a quantity of active accounts for a time period, a quantity of transactions or other activity by the accounts within the time period, a quantity of retracted (e.g., withdrawn or cancelled) transactions, a quantity of loss within the time period, and/or a quantity or existence of certain types of actions. For example, the metric calculation module may calculate, for each time period, a value for each of these or other metrics. A time period may be, for example, an hour, a half of a day, a day, three days, a week, etc.
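For purposes of illustration only, and not by way of limitation, per-group, per-period metric calculation of this kind may be sketched in Python as follows. The dictionary field names (group, timestamp, account, amount, retracted) are hypothetical placeholders and are not drawn from the disclosure or the figures:

def calculate_metrics(transactions, group_id, period_start, period_end):
    """Compute example per-group metric values for one time period.

    Field names below are illustrative assumptions, not the system's schema.
    """
    in_period = [t for t in transactions
                 if t["group"] == group_id
                 and period_start <= t["timestamp"] < period_end]
    retracted = [t for t in in_period if t["retracted"]]
    return {
        "active_accounts": len({t["account"] for t in in_period}),
        "transaction_count": len(in_period),
        "retracted_count": len(retracted),
        "loss": sum(t["amount"] for t in retracted),  # a gross-loss proxy
    }

Such a function would be invoked once per account group and per time period (e.g., per hour or per day), with the resulting value sets stored for later comparison.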
The outlier detection module 120 may compare metric values calculated for a present time period or a most recent time period to the values of the same metrics for previous time periods to determine if the present or most recent time period is an outlier relative to the previous values for one or more metrics. Accordingly, the outlier detection module 120 may store, in conjunction with the metric calculation module 118, historical  values for one or more metrics for one or more time periods and one or more account groups. Outliers may be indicative of heightened account takeover risk for the accounts in the relevant account group, in some embodiments.
The account grouping module 116, metric calculation module 118, and outlier detection module 120 may cooperatively identify accounts at risk of an account takeover. In response to a heightened account-based account takeover risk, an appropriate action may be taken automatically, as will be discussed below.
The transaction clustering module 122 may receive records of a plurality of transactions for a recent time period and may determine a risk of an account takeover for particular types of transactions, that is, transactions having a particular profile or a particular combination of characteristics. Characteristics of transactions that may be considered include, for example, a flow or sequence of interactions leading to the transaction, a flow through which the account logged in before the transaction, account login channel, and location (e.g., account-holder residence country, geographic origin of transaction instruction, etc. ) . A login flow or channel may include, for example, login via a merchant application on the customer’s device 108, via a website of the merchant 128, via an application associated with the transaction processing system 106, etc. In some embodiments, different combinations of values of these and/or other transaction characteristics may define transaction clusters.
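As an illustrative sketch, clustering by combinations of characteristic values may be implemented by keying transactions on a tuple of characteristic values, so that each distinct tuple defines one cluster. The characteristic field names in the commented example are hypothetical stand-ins for the flows, channel, and location described above:

from collections import defaultdict

def cluster_transactions(transactions, characteristics):
    """Group transactions so that each distinct tuple of characteristic
    values defines one cluster."""
    clusters = defaultdict(list)
    for t in transactions:
        clusters[tuple(t.get(c) for c in characteristics)].append(t)
    return clusters

# Hypothetical characteristic names for the factors described above:
# clusters = cluster_transactions(txs, ["pre_tx_flow", "login_flow",
#                                       "login_channel", "origin_country"])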
The cluster risk detection module 124 may determine, for each transaction cluster, the risk of account takeover based on transactions and outcomes within those clusters. For example, the cluster risk detection module may consider a transaction volume, a payment volume, a quantity of retracted transactions, a rate or percentage of disputed transactions, and/or other transaction volumes and outcomes within each cluster. Based on those volumes and outcomes, the cluster risk detection module may define one or more transaction clusters as at risk for account takeover, that is, indicative of an account takeover. The cluster risk  detection module 124 may determine a set of one or more clusters that are high-risk on a periodic basis (e.g., daily) and may establish and store rules for each next period that define which clusters are high-risk and what should be done in response to further transactions having the characteristic of each high-risk cluster, according to how high the risk is determined to be by the cluster risk detection module 124.
Just as the account grouping module 116, metric calculation module 118, and outlier detection module 120 may identify accounts at risk of an account takeover, the transaction clustering module 122 and cluster risk detection module 124 may identify transaction profiles that are indicative of an account takeover, so that action may be taken in response to further transactions that meet a high-risk transaction profile, as discussed below.
The auto-action module 126 may receive identification of risky account groups from the outlier detection module 120 and identification of risky transaction profiles from the cluster risk detection module 124 and may take responsive action as appropriate. For example, the auto-action module 126 may automatically notify every account that is classified as in a high-risk account grouping, such as via a notification email, a notification in an application associated with the transaction processing system 106, a notification text, etc. The notification may prompt the account holder to, for example, change their password, be alert to any phishing or other social engineering efforts, enable a second authentication factor for logging in, or take some other preventative measure. Additionally or alternatively, the auto-action module 126 may automatically notify one or more entities that host, onboard, or interact with accounts identified as risky, such as a merchant 128 or payment processor that hosts one or more accounts that fit a risky account group profile, or a merchant 128 or payment processor with a history of transacting with accounts that fit a risky account group profile. Additionally or alternatively, where the auto-action module 126 is separate from a transaction processing system 106 that hosts one or more accounts in a risky account grouping, the notification may be transmitted to such a transaction processing system 106. Such a notification may prompt the merchant, payment processor, or transaction processing system 106 to, for example, delay or refuse transactions with accounts matching high-risk profiles, to lock or require a password change from the affected accounts, etc.
Further, the auto-action module 126 may respond to transactions that share characteristics with a transaction cluster identified as risky by the cluster risk detection module 124, such as by rejecting such transactions or requiring a second authentication factor from the initiating user to confirm the transaction instruction. Accordingly, the auto-action module 126 may receive transactions instructed through the transaction processing system 106 and may hold approval and/or denial authority over such transactions.
The source of prior transaction data 104 may include records respective of a plurality of prior account activity on the transaction processing system 106 and/or other computing activity environment or system (s) . The prior transaction data may include, for each transaction, the transacting accounts, any third-party services or systems involved, the time of the transaction, the geographic locations of the transacting devices, and outcome of each transaction (e.g., whether the transaction was final, was retracted (e.g., disputed) , or other outcome) . As described herein, the prior transaction data may be used by the ATO detection system 102 to detect risk of account takeover on an account basis and/or on a transaction basis.
Both users and merchants 128 may initiate transactions, review transactions, complete transactions, etc. through the transaction processing system 106. Accordingly, the transaction processing system 106 may receive, from user computing devices 108 or merchants 128, instructions to initiate a transaction, an instruction to accept or complete a transaction, an instruction to review one or more transactions, an instruction to retract a transaction, etc., and may respond by performing or facilitating the requested user or  merchant action. For example, the instruction may be received from a server associated with a merchant (e.g., a server hosting an application or website of the merchant) , with the instruction providing information respective of the transacting parties (e.g., the merchant’s information and the user’s information) and details of the transaction (e.g., the goods exchanged and the cost) , and the transaction processing system 106 subsequently processing the transaction, including collecting any needed further information (e.g., payment information from the user, a user login to the transaction processing system 106, and so on) . Similarly, such an instruction from a merchant may be received by the transaction processing system 106 from a subroutine or subprogram associated with a particular merchant executing in an environment operated by the transaction processing system 106 (e.g., a website or application of the transaction processing system 106) .
Accordingly, user activity as discussed herein may include transactions instructed through the transaction processing system 106, in some embodiments, and/or user activity on one or more platforms, networks, etc. Such transactions may include, for example, a computing transaction such as a file creation, a revision to a file, an electronic communication, a financial transaction (or component thereof) , a real-estate transaction (or component thereof) , a service request, or any other electronic transaction. Additionally or alternatively, user activity according to the present disclosure may be or may include an event associated with a user, such as a user navigation to a webpage, a user search request, etc.
The transaction processing system 106 may be associated with a particular electronic user interface and/or platform through which users and merchants perform electronic transactions (e.g., any of merchant-to-user transactions, user-to-user transactions, and merchant-to-merchant or other business-to-business transactions). The electronic user interface may be embodied in a website, mobile application, etc. Accordingly, the transaction processing system 106 may be associated with or wholly or partially embodied in one or more servers, which server(s) may host the interface, and through which the user computing devices 108 and merchants 128 may access the user interface.
The user computing devices 108 may be respectively associated with different user accounts. That is, user computing device 108a may be associated with a first user account, and user computing device 108b may be associated with a second user account. Where user computing devices are discussed herein, it may be assumed that different devices are associated with different user accounts for convenience of description, though of course a single user account may be accessed from multiple devices in practical use.
Similarly, different merchants 128 may be associated with different computing resources (e.g., different servers, different applications, different transacting locations, etc. ) . For example, merchant 128a may be based in Country A with its servers in Country B and its payment accounts and payment receipt accounts locations in Country C, whereas merchant 128b may be based in Country B with its servers in Country C and its payment accounts and payment receipt accounts locations in Country D.
FIG. 2 is a flow chart illustrating an example method 200 of detecting account takeover risk and performing a responsive action. The method 200, or one or more portions of the method 200, may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
The method 200 may include, at operation 202, receiving past transaction data. The past transaction data may include, for example, data from the prior transaction data source 104.
The method 200 may further include, at operation 204, determining, periodically, an account-based account takeover risk. Operation 204 may include, for example, determining account groupings, determining account metric values for a plurality of past time periods and a current time period, and determining outlier metric values for the current time  period, as discussed above with respect to the account grouping module 116, the metric calculation module 118, and the outlier detection module 120. A detailed example of operation 204 is described with respect to the method 300 of FIG. 3 below. Operation 204 may include determining account-based takeover risk, for example, on an hourly basis, a daily basis, a weekly basis, etc.
The method 200 may further include, at operation 206, determining, periodically, a transaction-based account takeover risk. Operation 206 may include, for example, determining transaction clusters and determining account takeover risk with respect to each cluster, as discussed above with respect to the transaction clustering module 122 and the cluster risk detection module 124. A detailed example of operation 206 is described with respect to the method 500 of FIG. 5 below. Operation 206 may include determining transaction-based takeover risk, for example, on an hourly basis, a daily basis, a weekly basis, etc.
The method may further include, at operation 208, responding, continuously, to an elevated risk of account takeover at either of operations 204, 206. For example, operation 208 may include notifying accounts at risk of takeover, and/or repositories of those accounts (e.g., a merchant or third party service provider, where the heightened risk is based on the association of the accounts with the merchant or third party service provider) according to operation 204 and responding to further transactions that match transaction profiles identified as high-risk at operation 206. Examples of operation 208 are described in both methods 300, 500 below.
Operations 204, 206 may be performed substantially in parallel, such that an electronic transaction ecosystem is monitored for account takeover risk on an account basis (operation 204) and on a transaction basis (operation 206) . The past transactions that are  used to establish baseline risk values may be the same, or may at least partially overlap, between operations 204 and 206.
FIG. 3 is a flow chart illustrating an example method 300 of determining, periodically, an account-based account takeover risk. The method 300 may be an embodiment of operations 204, 208 of the method 200 of FIG. 2. The method 300, or one or more portions of the method 300, may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
FIG. 4 is a diagrammatic view of a system 400 for performing the method 300 of FIG. 3. The method 300 will be described in conjunction with the system 400.
The method 300 includes, at operation 302, defining a plurality of account groupings for monitoring according to the characteristics of those accounts. Account groupings, or groups, may be determined according to one or both of account profile information (e.g., bibliographic information respective of the account) and account behaviors. Groupings may be based on account characteristics, or combinations of characteristics, that have been indicative of fraudulent behavior or takeover risk based on past transaction data, including geographic origin, age of account, transaction volume, time of day for transactions, and other account characteristics. Account groupings may be or may include groups of end user accounts with common characteristics (e.g., where a first grouping has a first set of common characteristics among the accounts in the group, a second grouping has a different second set of common characteristics, and so on). In one example, a combined profile and behavior characteristic that may be considered is a mismatch between the country of origin of the account profile and the country from which an account was logged in, with such a mismatch being potentially indicative of fraudulent behavior, especially if a given pattern (particular login country different from countries of many profiles) has a high volume in a given period of time.
Operation 302 may include assigning each of a plurality of accounts to a single respective grouping, in some embodiments. That is, each account may belong to only a single account group. In other embodiments, account groups may be defined such that a single account may belong to multiple groups. Further, in some embodiments, groups may be defined such that only a subset of all possible accounts (e.g., only a subset of the accounts of the transaction processing system 106) are in a defined group. In other embodiments, groups may be defined such that all possible accounts are collectively included in the defined groups. Based on operation 302, each account group may include a target set of accounts for monitoring.
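A minimal sketch of such single-membership grouping follows, with hypothetical profile field names; accounts missing a grouping characteristic are left ungrouped, reflecting that the defined groups may cover only a subset of accounts:

def assign_account_groups(accounts, grouping_keys):
    """Assign each account to at most one group keyed by shared characteristics."""
    groups = {}
    for acct in accounts:
        values = [acct.get(k) for k in grouping_keys]
        if None in values:
            continue  # account falls outside every defined group
        # "account_id" is an illustrative field name, not the system's schema.
        groups.setdefault(tuple(values), []).append(acct["account_id"])
    return groups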
The method 300 may further include, at operation 304, for each account group, receiving transaction data for a plurality of sub-periods of a recent past time period. The transaction data is shown as the transaction data source 104 in FIG. 4. The transaction data may include data respective of each of the account groups. The transaction data received at operation 304 may be respective of multiple time sub-periods within a larger time period. For example, the transaction data may be respective of each day (sub-periods) within a plurality of days (period) (e.g., 15 days, 30 days, 60 days, 90 days, 120 days, etc. ) , with each day being a respective discrete time sub-period. In another example, the transaction data may be respective of a plurality of weeks (e.g., 4 weeks, 8 weeks, 24 weeks, 52 weeks, etc. ) , with each week being a discrete time sub-period.
The method 300 may further include, at operation 306, calculating values of a plurality of risk metrics for each sub-period and for each account group. An example set of account metrics is shown in block 402 in FIG. 4, which shows Account Metric A, Account Metric B, ..., Account Metric P. Such metrics may include, for example, a number of users,  a total transaction volume, a quantity of retracted transactions (e.g., cancelled or disputed) , a total loss within the group (e.g., gross loss or net loss) , a quantity of loss that is addressed by existing actions or solutions, and/or other metrics. Each calculated metric value, for each sub-period and each account group, may then be stored in a repository of metric values, such as the historical metric values database 404 shown in FIG. 4. Referring to FIGS. 1 and 4, the historical metrics database 404 may be included in the ATO detection system 102 and/or the prior transaction data source 104, in embodiments.
Operation 306 may be performed on a periodic basis, e.g., for the most recent time sub-period at the end of that sub-period. Accordingly, the historical metrics database 404 may include a set of metric values for each account group for a large number of past time periods (e.g., more time periods than are considered for outlier detection, as discussed below) . In other embodiments, operation 306 may additionally include discarding metric values that are older than a threshold in order to conserve data storage space and improve database or other memory efficiency, whereby the historical metrics database 404 stores only the time duration of metrics calculations that are intended to be used for outlier detection, as discussed below.
In some embodiments, operation 306 may include calculating values for the same metrics for all account groups. In other embodiments, operation 306 may include calculating values for different metrics for different account groups. Because different groups may have different characteristics, or may be defined based on different behaviors or profile factors, different metrics may be more appropriate for one group than for another in assessing fraud risk.
The method 300 may further include, at operation 308, discarding a top-N highest values for each metric and each account group to generate comparison values for each combination of metric and account group. As a result of operation 308, a set of comparison values may exist for each metric and each account group, and the set of comparison values may be used to determine whether a particular time period is an outlier with respect to one or more metrics, which may indicate a heightened risk of account takeover. Discarding a certain number of values at operation 308 may ensure that prior outliers are not used in setting baseline metric values. Accordingly, the number N of values that are discarded may be selected based on a specific quantity (e.g., five values), based on a percentage of values (e.g., ten percent of values, twenty percent of values, fifty percent of values, etc.), based on a deviance (e.g., values that are two or more standard deviations from the mean value), or based on some other selection. Discarding values may include, in some embodiments, ignoring the discarded values for outlier determination, though the discarded values may remain stored in the historical metrics database 404.
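As a sketch of the trimming step, assuming the fixed-count variant of N:

def comparison_values(history, n=5):
    """Drop the N highest historical values so prior spikes do not inflate
    the baseline; the remaining values form the comparison set."""
    if n <= 0:
        return list(history)
    if n >= len(history):
        raise ValueError("not enough history to discard N values")
    return sorted(history)[:-n]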
The method 300 may further include, at operation 310, for each account group, receiving present transaction data. The present transaction data may be transaction data for all accounts in an account group for a most recent time sub-period (e.g., the most recent day, the most recent week, etc. ) . The present transaction data may include, for example, details of parties (e.g., merchants) with which each account has transacted, the goods or services exchanged in those transactions, the payment types used in those transactions, geographic locations associated with those transactions (e.g., of the computing devices of the transacting parties, of the “home” location of the transacting user (s) and/or merchants) , a quantity of transactions engaged in by each account, and/or any other data that may be associated with a transaction entered into by a party consistent with this disclosure.
The method 300 may further include, at operation 312, calculating values of each risk metric based on the present transaction data for each account group. Operation 312 may include calculating metric values for the same groups and the same metrics as were determined at operation 306, but for the most recent time sub-period.
Operations 310 and 312 may be performed periodically, e.g., at the end of every sub-period. For example, operations 310 and 312 may be performed at the end of every hour, every day, every week, etc.
In some embodiments, operations 304, 306 may be collectively performed over time through repeated performance of operations 310, 312.
The method 300 may further include, at operation 314, determining, for each account group, if the present transaction data metric values deviate from the comparison metric values (e.g., if the present metric values are outliers) , as shown at block 406 in FIG. 4. Operation 314 may include for example, for each group and each metric, given the most recent metric value (d0) and previous 90-days (or other appropriate number of sub-periods) of historical metric values (d-1, …, d-90) , the most recent time period metric value (d0) may be compared to an adaptive average of historical metrics values in the 90-day time window (A-AVG (d-1, d-90) ) to estimate a deviation value according to equation (1) below:
deviation=d0-AVG (d-1,d-90)       (Eq. 1)
As used above and below, “adaptive” standard deviation and “adaptive” average refer to calculations performed after the top-N values for the metric are discarded at operation .
Operation 314 may further include, for each account group and each metric, normalizing the deviation amount with the adaptive standard deviation of historical metrics in a 90-day time window (A-STD (d-1, d-90) ) . If the EG ratio (EGSTD) , calculated according to equation (2) below, is more than a predetermined threshold (e.g., 500) , the most recent time period metric may be considered an outlier:
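By way of illustration, the outlier test of equations (1) and (2) may be sketched in Python as follows; the function names, the fixed top-N count, and the threshold of 500 (the example value given above) are illustrative assumptions:

import statistics

def eg_ratio(d0, history, n_discard=5):
    """Return EGSTD for the most recent metric value d0.

    `history` holds the historical metric values (d-1 ... d-90). The top-N
    values are discarded first, so the mean and standard deviation below are
    the "adaptive" A-AVG and A-STD described above.
    """
    if n_discard >= len(history) - 1:
        raise ValueError("not enough history after discarding top-N values")
    trimmed = sorted(history)[:-n_discard]   # drop the N highest values
    a_avg = statistics.mean(trimmed)         # A-AVG(d-1, d-90)
    a_std = statistics.stdev(trimmed)        # A-STD(d-1, d-90)
    deviation = d0 - a_avg                   # Eq. 1
    if a_std == 0:
        return float("inf") if deviation > 0 else 0.0
    return deviation / a_std                 # Eq. 2

def is_outlier(d0, history, threshold=500):
    """Flag the most recent sub-period as an outlier per the example threshold."""
    return eg_ratio(d0, history) > threshold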
The method 300 may further include, at operation 316 and block 408, notifying users associated with the accounts in account groups that deviate from the comparison metric values, or notifying the hosts of such accounts. Such a notification may be or may include, for example, a notification email, a notification in an application associated with the transaction processing system 106, a notification text, etc. The notification may prompt a user, for example, to change their password or review their transaction history for fraudulent transactions. If sent to an account host, the account host may lock high-risk accounts, prompt the users of high-risk accounts to change their passwords, etc.
The system 400 may provide output in addition to notifications, in some embodiments. For example, the system 400 may include a visualization user interface 410 in which metric values and other calculations of the method 300 may be displayed for a user. Referring to FIG. 1, the visualization interface may be hosted by the ATO detection system 102 and may be accessible to the proprietor of the transaction processing system 106 and its employee and contractor users that maintain the transaction processing system 106, rather than to the end users of the transaction processing system 106. Accordingly, the visualization interface 410 may be used by the transaction processing system 106 to monitor ATO trends and more granular data to, for example, manually intervene as to one or more account groups, to alter, add, or remove account group definitions, etc.
The method 300 and system 400 may be employed to provide periodic monitoring of accounts for account takeover risk. For example, accounts may be grouped, and then each group may be assessed independently on a day-by-day, week-by-week, or other basis to determine risk levels. Where a particular account group is determined to be at a heightened risk (because a metric value respective of that group is an outlier for the most recent day, week, etc. ) , corrective action may be taken automatically with respect to each account in that particular group. As noted above, the corrective action may be a notification. Accounts that  are not grouped, or that are not in the group determined to be at heightened risk, may not be subject to the corrective action.
The method 300 and system 400 may provide improved account-based fraud detection that may be broadly applicable to accounts across a variety of account hosts, rather than only to the entity from which relevant transactions originated. Furthermore, by outputting a notification when a heightened risk of fraud is detected, the method 300 and system 400 may enable more tailored responses to heightened fraud risk depending on the host type, as an appropriate responsive action may be different where a transaction processing system is an account host than where a merchant is an account host. Accordingly, the method 300 and system 400 may improve the technical field of fraud detection.
FIG. 5 is a flow chart illustrating an example method 500 of determining, periodically, transaction-based account takeover risk. The method 500 may be an embodiment of operations 206, 208 of the method 200 of FIG. 2. The method 500, or one or more portions of the method 500, may be performed by the ATO detection system 102 in conjunction with the transaction processing system 106, and thus may be computer-implemented.
FIG. 6 is a diagrammatic view of a system 600 for performing the method 500 of FIG. 5. The method 500 will be described in conjunction with the system 600.
The method 500 may include two general portions: an offline, periodic portion (indicated by dashed box 550 in FIG. 5 and dashed box 650 in FIG. 6) and an online, continuous portion (indicated by dashed box 560 in FIG. 5 and dashed box 660 in FIG. 6). The offline, periodic portion 550, 650 may be performed, for example, at the end of a relevant monitoring period in order to define characteristics of transactions that are classified as high-risk. The online, continuous portion 560, 660 may then apply the most recently-defined risky transaction characteristics to detect risky transactions on a continuous basis as those transactions are instructed, and to respond appropriately.
The method 500 may include, at operation 502, receiving past transaction data (shown as prior transaction data 104 in FIG. 6) . The past transaction data may include, for example, data respective of all transactions performed within a certain service, domain, processor, etc. (e.g., through the transaction processing system 106) within a certain period of time. For example, all transactions for a most recent day, two days, three days, one week, two weeks, etc. may be received.
The method 500 may further include, at operation 504, clustering past transactions according to transaction characteristics. Characteristics of transactions that may be considered include, for example, a flow or sequence of interactions leading to the transaction (e.g., a user activity flow immediately before the transaction) , a flow through which the account logged in before the transaction (e.g., a user login flow of a user that initiated the computing action) , account login channel (e.g., an access channel of a computing system that initiated the transaction) , and location (e.g., account-holder residence country, geographic origin of transaction instruction, etc. ) . In some embodiments, different combinations of values of these and/or other transaction characteristics may define transaction clusters. A plurality of clusters may be defined at operation 504. FIG. 6 illustrates an example plurality of clusters 602-cluster 1, cluster 2, ..., cluster N.
The method 500 may further include, at operation 506, calculating a risk of an account takeover for each transaction cluster based on the past transactions. The risk may be calculated, in some embodiments, by calculating one or more metrics for each transaction cluster, using those metric values as input to an objective function, and comparing the value of that objective function to a threshold, as discussed in detail below.
Operation 506 may include calculating metric values for each transaction cluster, where the metrics may include, for example, a quantity of transactions, a transaction volume (e.g., in terms of goods exchanged, files exchanged, currency exchanged, etc. ) , a quantity of retracted transactions (QT) , a rate (e.g., percentage) of retracted transactions (RT) , and/or one or more other metrics. Such transaction-focused metrics are shown at block 604 in FIG. 6 as Tx metric A, Tx metric B, ... Tx metric M.
The values of one or more of the metrics may be input into an objective function, such that the objective function will have a value respective of each transaction cluster. An example objective function obj is set forth in equation (3) below:
obj = ln(RT) * QT      (Eq. 3)
Although the objective function of equation (3) incorporates the retracted transaction quantity and retracted transaction rate, in other embodiments, different metrics may be included in the objective function if data (e.g., data respective of past account takeovers and associated transactions) indicates that those other metrics correlate with fraudulent activity. Notably, by incorporating both a quantity and a rate, the objective function of equation (3) avoids false positives that would otherwise result from a high retracted transaction rate at low volume, or from a large retracted transaction quantity where that large quantity came in the context of a massive sample size.
The value of the objective function, as to each cluster, may be compared to an objective function value threshold, and any transaction cluster having an objective function value above the objective value threshold may be classified as high-risk.
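For illustration, the objective-function scoring and threshold test of equation (3) may be sketched as follows. Treating RT as a percentage (so that the logarithm is positive for rates above one percent) is an assumption, as is the dictionary layout of the per-cluster metrics:

import math

def cluster_objective(qt, rt_pct):
    """obj = ln(RT) * QT per Eq. 3, with RT assumed expressed as a percentage
    (rt_pct must be positive)."""
    return math.log(rt_pct) * qt

def high_risk_clusters(cluster_metrics, obj_threshold):
    """Return keys of clusters whose objective value exceeds the threshold.

    `cluster_metrics` maps each cluster key to a dict with illustrative
    fields "retracted_count" (QT) and "retracted_rate_pct" (RT)."""
    return [key for key, m in cluster_metrics.items()
            if cluster_objective(m["retracted_count"],
                                 m["retracted_rate_pct"]) > obj_threshold]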
In some embodiments, one or more filters may be applied to clusters before applying the objective function, or before classifying clusters as high-risk. For example, only clusters that have at least a threshold quantity of retracted transactions, and/or at least a threshold rate of retracted transactions, may be considered high-risk. Accordingly, in some embodiments, operation 506 may include comparing the rate of retracted transactions of each cluster to a retracted transaction rate threshold, comparing the quantity of retracted transactions to a retracted transaction quantity threshold, and/or comparing one or more other metrics to appropriate thresholds, and including only clusters that exceed such thresholds in consideration for high-risk clusters. Referring to FIG. 6, operation 506 may result in a set 606 of high-risk cluster definitions.
The method 500 may further include, at operation 508, assigning automated actions to transaction clusters (e.g., to future transactions matching the cluster’s characteristics) having a risk above a threshold, i.e., the clusters designated as high-risk at operation 506. Such automated actions may include, for example, declining a transaction or requiring a second authentication factor before processing a transaction. Operation 508, in conjunction with operation 506, may further include uploading the high-risk cluster definitions (e.g., the characteristics of each high-risk cluster) and the associated automated action to a database (shown as RADD ( “Risk Analytics Dynamic Dataset” ) 608 in FIG. 6). The database 608 may be updated on a periodic basis for going-forward application. For example, in some embodiments, operations 502, 504, 506, 508 may be repeated periodically, with each repetition using a different time period, so that the database 608 is updated for application to the next period’s transactions. In embodiments, the database 608 may be updated on a daily basis, a weekly basis, a monthly basis, etc. In some embodiments, operations 502, 504, 506, 508 may be performed in a batch process at the end of the time period, or during a sub-period within the time period when the computing resources of the relevant system are less strained. For example, where the method 500 is performed by the ATO detection system 102, which may operate in conjunction with the transaction processing system 106, the operations 502, 504, 506, 508 may be performed at a time of day in which the transaction processing system 106 regularly experiences lower transaction volume.
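The uploaded rule records might, purely as an illustrative sketch, pair each high-risk cluster’s characteristic values with its assigned action; the record layout and action labels below are assumptions, not the RADD’s actual schema:

def build_auto_action_rules(high_risk_keys, characteristics, action):
    """Build rule records pairing high-risk cluster definitions with an
    automated action (e.g., "require_second_factor" or "decline")."""
    return [{"characteristics": dict(zip(characteristics, key)),
             "action": action}
            for key in high_risk_keys]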
The cluster definitions and automatic actions stored in the RADD 608 may be considered a set of auto-action rules for responding to transactions. By periodically generating simple, easy-to-apply, easy-to-execute rules and associating appropriate actions based on recent fraud trends and transaction activity indicative of potential fraud, the method 500 and system 600 provide an approach for effectively combating fraud in a computationally-efficient way that improves the functioning of anti-fraud computing systems. For example, by processing large quantities of transactions in an offline manner to generate simple rules, the real-time processing load for fraud detection is relatively low, and fraud checks can be executed with relatively little processing demand on a per-transaction basis, enabling faster execution.
The method 500 may further include, at operation 510, receiving a transaction request for a transaction matching the characteristics of a high-risk transaction cluster. The transaction request may be one that is received and on which a decision must be made substantially in real time, where the decision includes whether to approve the transaction and other details of how to process the transaction. The transaction request may be received by, or from, the transaction processing system 106. Operation 510 may include comparing characteristics of the received transaction request to characteristics of transaction clusters stored in the database 608 and concluding that the received transaction matches one of the high-risk transaction clusters based on matching characteristics of the transaction and the cluster.
The method 500 may further include, at operation 512, applying the assigned automated action in response to the transaction request that matches the characteristics of the high-risk transaction cluster, as shown at block 610 of FIG. 6. Operation 512 may include  retrieving the assigned automated action from the database 608 and applying the assigned automated action in response to the transaction request.
Operations 510, 512 may be performed on a continuous basis. Accordingly, in some embodiments, the method 500 may include receiving all new transaction requests as those requests are made and comparing the transaction requests to stored high-risk transaction cluster definitions (operation 510) and, for each transaction that matches a high-risk cluster, applying the stored responsive action associated with that cluster automatically (operation 512).
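A sketch of this continuous matching step, under the same assumed rule layout as the earlier rule-building sketch, follows:

def apply_auto_action(request, rules):
    """Return the assigned automated action for a matching high-risk rule,
    or None if the request matches no high-risk cluster definition.

    A request matches a rule when every characteristic in the rule has the
    same value in the request."""
    for rule in rules:
        if all(request.get(c) == v
               for c, v in rule["characteristics"].items()):
            return rule["action"]  # e.g., "require_second_factor" or "decline"
    return None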
As shown in FIG. 6, the automated rules, and their responsive actions, may be made available to and applied to transactions occurring through or with a variety of sources. For example, the automated rules and responsive actions may be used in connection with a transaction processing system 106. Additionally or alternatively, the automated rules and responsive actions may be made available to payment processors 612 and/or merchants 128, for those payment processors 612 and/or merchants 128 to check transactions in which they may engage against the risky transaction profiles and take appropriate responsive action.
FIG. 7 is a block diagram of an example computing system, such as a desktop computer, laptop, smartphone, tablet, or any other such device having the ability to execute instructions, such as those stored within a non-transitory, computer-readable medium. Furthermore, while described and illustrated in the context of a single computing system 700, those skilled in the art will also appreciate that the various tasks described hereinafter may be practiced in a distributed environment having multiple computing systems 700 linked via a local or wide-area network in which the executable instructions may be associated with and/or executed by one or more of multiple computing systems 700.
In its most basic configuration, computing system environment 700 typically includes at least one processing unit 702 and at least one memory 704, which may be linked via a bus 706. Depending on the exact configuration and type of computing system  environment, memory 704 may be volatile (such as RAM 710) , non-volatile (such as ROM 708, flash memory, etc. ) or some combination of the two. Computing system environment 700 may have additional features and/or functionality. For example, computing system environment 700 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks, tape drives and/or flash drives. Such additional memory devices may be made accessible to the computing system environment 700 by means of, for example, a hard disk drive interface 712, a magnetic disk drive interface 714, and/or an optical disk drive interface 716. As will be understood, these devices, which would be linked to the system bus 706, respectively, allow for reading from and writing to a hard disk 718, reading from or writing to a removable magnetic disk 720, and/or for reading from or writing to a removable optical disk 722, such as a CD/DVD ROM or other optical media. The drive interfaces and their associated computer-readable media allow for the nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing system environment 700. Those skilled in the art will further appreciate that other types of computer readable media that can store data may be used for this same purpose. Examples of such media devices include, but are not limited to, magnetic cassettes, flash memory cards, digital videodisks, Bernoulli cartridges, random access memories, nano-drives, memory sticks, other read/write and/or read-only memories and/or any other method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Any such computer storage media may be part of computing system environment 700.
A number of program modules may be stored in one or more of the memory/media devices. For example, a basic input/output system (BIOS) 724, containing the basic routines that help to transfer information between elements within the computing system environment 700, such as during start-up, may be stored in ROM 708. Similarly, RAM 710, hard disk 718, and/or peripheral memory devices may be used to store computer-executable instructions comprising an operating system 726, one or more application programs 728, other program modules 730, and/or program data 732. Still further, computer-executable instructions may be downloaded to the computing system environment 700 as needed, for example, via a network connection. The application programs 728 may include, for example, a browser, including a particular browser application and version, which browser application and version may be relevant to determinations of correspondence between communications and user URL requests, as described herein. Similarly, the operating system 726 and its version may be relevant to determinations of correspondence between communications and user URL requests, as described herein.
An end-user may enter commands and information into the computing system environment 700 through input devices such as a keyboard 734 and/or a pointing device 736. While not illustrated, other input devices may include a microphone, a joystick, a game pad, a scanner, etc. These and other input devices would typically be connected to the processing unit 702 by means of a peripheral interface 738 which, in turn, would be coupled to bus 706. Input devices may be directly or indirectly connected to the processing unit 702 via interfaces such as, for example, a parallel port, a game port, FireWire, or a universal serial bus (USB). To view information from the computing system environment 700, a monitor 740 or other type of display device may also be connected to bus 706 via an interface, such as video adapter 733. In addition to the monitor 740, the computing system environment 700 may also include other peripheral output devices, not shown, such as speakers and printers.
The computing system environment 700 may also utilize logical connections to one or more remote computing system environments. Communications between the computing system environment 700 and the remote computing system environment may be exchanged via a further processing device, such as a network router 748, that is responsible for network routing. Communications with the network router 748 may be performed via a network interface component 744. Thus, within such a networked environment, e.g., the Internet, World Wide Web, LAN, or other like type of wired or wireless network, it will be appreciated that program modules depicted relative to the computing system environment 700, or portions thereof, may be stored in the memory storage device(s) of the remote computing system environment.
The computing system environment 700 may also include localization hardware 746 for determining a location of the computing system environment 700. In embodiments, the localization hardware 746 may include, for example only, a GPS antenna, an RFID chip or reader, a WiFi antenna, or other computing hardware that may be used to capture or transmit signals that may be used to determine the location of the computing system environment 700. Data from the localization hardware 746 may be included in a callback request or other user computing device metadata in the methods of this disclosure.
The computing system, or one or more portions thereof, may embody a user computing device 108, in some embodiments. Additionally or alternatively, some components of the computing system 700 may embody the ATO detection system 102 and/or transaction processing system 106. For example, the functional modules 116, 118, 120, 122, 124, 126 may be embodied as program modules 730.
In a first aspect of the present disclosure, a computer-implemented method is provided. The method includes (i) receiving, by a computing system, data respective of a plurality of computing actions for a time period, (ii) categorizing, by the computing system, each of the plurality of computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values of a plurality of characteristics of the computing actions, (iii) calculating, by the computing system, for each of the plurality of clusters, a respective risk for the time period, (iv) determining, by the computing system, for one or more clusters of the plurality of clusters, that the respective risk exceeds a threshold, and (v) in response to (iv), by the computing system, automatically performing a fraud prevention action for further computing actions having the respective combination of respective values of characteristics associated with the one or more clusters, wherein (i), (ii), (iii), and (iv) are performed periodically in a batch process, and (v) is performed continuously according to a most recent periodic performance of (i), (ii), (iii), and (iv).
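For illustration only, the batch portion of this first aspect might be sketched in Python as follows; the characteristic field names, the retraction-rate risk measure (one embodiment described below), and the threshold value are assumptions rather than elements of the disclosure.

```python
# A minimal sketch of the periodic batch process (i)-(iv); field names and
# the threshold are illustrative assumptions, not taken from the disclosure.
from collections import defaultdict

CHARACTERISTICS = ("geo_origin", "access_channel", "login_flow", "pre_action_flow")
RISK_THRESHOLD = 0.05  # assumed value for illustration

def high_risk_clusters(actions):
    """Return the characteristic combinations whose risk exceeds the threshold."""
    clusters = defaultdict(list)
    for action in actions:                               # (i) actions for one time period
        key = tuple(action[c] for c in CHARACTERISTICS)  # (ii) categorize into clusters
        clusters[key].append(action)
    risky = {}
    for key, members in clusters.items():
        risk = sum(a["retracted"] for a in members) / len(members)  # (iii) retraction rate
        if risk > RISK_THRESHOLD:                        # (iv) threshold test
            risky[key] = risk
    return risky
```

In this picture, each periodic batch run replaces the stored set of risky combinations consumed by the continuous step (v), so the continuous screening always reflects the most recent batch result.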
In an embodiment of the first aspect, the plurality of characteristics comprises two or more of a geographic origin of the computing action, an access channel of a computing system that initiated the computing action, a user login flow of a user that initiated the computing action, or a user activity flow immediately before the computing action.
In an embodiment of the first aspect, the fraud prevention action comprises one or more of requiring a second authentication factor of a user in the further computing action, or declining the further computing action.
In an embodiment of the first aspect, repeating (i) - (iv) periodically includes each repetition using a different respective time period in (i) .
In an embodiment of the first aspect, calculating the respective risk for the time period comprises determining a respective rate of retracted computing actions for the time period.
In an embodiment of the first aspect, (v) comprises, for a first one of the further computing actions, requiring a second authentication factor of a user in the first further computing action, and for a second one of the further computing actions, declining the second further computing action.
In a second aspect of the present disclosure, a computer-implemented method of detecting an account takeover associated with computing actions is provided. The method includes (i) receiving, by a computing system, data respective of a first plurality of computing actions for a first time period, the first time period comprising a plurality of sub-periods, (ii) receiving, by the computing system, data respective of a second plurality of computing actions for a second time period, the second time period different from the first time period, (iii) calculating, by the computing system, a plurality of metrics for each of the sub-periods to generate, for each of the metrics, a respective plurality of first metric values, (iv) discarding, by the computing system, for each of the metrics, a set of highest metric values to generate, for each of the metrics, a respective plurality of comparison values, (v) calculating, by the computing system, the plurality of metrics for the second time period to generate, for each of the metrics, a respective second metric value, (vi) determining that at least one of the second metric values is an outlier with respect to the comparison values, wherein the outlier determination is indicative of a computing action account takeover, and (vii) in response to (vi), transmitting an account takeover notification.
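For illustration, steps (iii)-(v) might be sketched as follows; the number of discarded highest values is an assumed parameter, not specified by this aspect.

```python
# A minimal sketch of steps (iii)-(v); discard_top is an illustrative choice,
# not a value taken from the disclosure.

def sub_period_values(metric, actions_by_sub_period):
    """(iii) Evaluate one metric over each sub-period of the first time period."""
    return [metric(actions) for actions in actions_by_sub_period]

def comparison_values(first_values, discard_top=2):
    """(iv) Drop the highest values so earlier attack spikes cannot inflate the baseline."""
    return sorted(first_values)[:-discard_top] if discard_top else list(first_values)

def second_metric_value(metric, second_period_actions):
    """(v) The same metric, evaluated over the second (e.g., most recent) time period."""
    return metric(second_period_actions)
```

Discarding the top values at step (iv) matters because an account takeover that occurred during the baseline period would otherwise raise the baseline itself and mask a present-day attack.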
In an embodiment of the second aspect, (i) – (vi) are performed with respect to a target set of computing action accounts, and (vii) comprises transmitting an outlier notification to a respective user associated with each computing action account in the set of computing action accounts.
In an embodiment of the second aspect, the method further includes defining a plurality of target sets of computing action accounts, wherein (i) – (vi) are performed with respect to each target set of computing action accounts, and wherein the plurality of metrics comprises a first plurality of metrics for a first set of the plurality of target sets of computing action accounts and a second plurality of metrics for a second set of the plurality of target sets of computing action accounts, wherein the first plurality of metrics is different from the second plurality of metrics.
In an embodiment of the second aspect, determining that at least one of the second metric values is an outlier with respect to the comparison values includes determining, for each of the plurality of metrics, a respective average of the respective comparison values for the metric, calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric, and determining that at least one of the deviations exceeds a predetermined threshold. In a further embodiment of the second aspect, the method further includes determining, for each of the plurality of metrics, a respective standard deviation of the respective comparison values for the metric, normalizing, for each of the plurality of metrics, the deviation of the second metric value by the respective standard deviation of the metric to calculate a normalized deviation of the second metric value, and determining that at least one of the normalized deviations exceeds a predetermined threshold.
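A minimal sketch of this outlier test, including the standard-deviation normalization of the further embodiment, follows; the z-score threshold of 3.0 is an assumed value, and the baseline is assumed to contain at least two comparison values.

```python
# A minimal sketch of the outlier test with standard-deviation normalization;
# the threshold is an assumed value, not taken from the disclosure.
import statistics

def is_outlier(second_value, comparison, z_threshold=3.0):
    """Return True if second_value deviates anomalously from the comparison values."""
    mean = statistics.mean(comparison)
    std = statistics.stdev(comparison)  # requires len(comparison) >= 2
    if std == 0:
        # Degenerate baseline: any change from a constant history is anomalous.
        return second_value != mean
    normalized_deviation = abs(second_value - mean) / std
    return normalized_deviation > z_threshold
```

Normalizing by the standard deviation lets a single threshold serve metrics of very different scales, such as a count of users and a monetary loss total.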
In an embodiment of the second aspect, one or more of the following applies: the plurality of sub-periods are of equal duration to each other, or the second time period is of equal duration to at least one of the sub-periods.
In an embodiment of the second aspect, the plurality of metrics comprise one or more of a number of users, a total computing action volume, a disputed computing action volume, or a total loss.
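As an illustration, these four example metrics can be expressed as simple aggregations over a target set's action records; the field names used below (user_id, amount, disputed, loss) are assumptions for the sketch.

```python
# Illustrative metric functions over a list of action records; field names
# are assumptions, not taken from the disclosure.
METRICS = {
    "number_of_users": lambda actions: len({a["user_id"] for a in actions}),
    "total_volume": lambda actions: sum(a["amount"] for a in actions),
    "disputed_volume": lambda actions: sum(a["amount"] for a in actions if a["disputed"]),
    "total_loss": lambda actions: sum(a.get("loss", 0.0) for a in actions),
}
```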
In a third aspect of the present disclosure, a computer-implemented method of preventing fraudulent computing action activity is provided. The method includes (i) assigning, by a computing system, an account of a user to a target set of computing action accounts, (ii) determining, by the computing system, first respective values for a plurality of metrics for first past computing actions of the target set of computing action accounts, (iii) determining, by the computing system, second respective values for the plurality of metrics for present computing actions of the target set of computing action accounts, (iv) determining, by the computing system, for at least one of the metrics, that the second respective value is an outlier with respect to the first respective values and, in response, transmitting an outlier notification to the user, (v) receiving, by the computing system, data respective of second past computing actions, the second past computing actions being for a time period, (vi) categorizing, by the computing system, each of the second past computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values of a plurality of characteristics of the second past computing actions, (vii) calculating, by the computing system, for each of the plurality of clusters, a respective risk for the time period, (viii) determining, by the computing system, for one or more clusters of the plurality of clusters, that the respective risk exceeds a threshold and, in response, assigning, by the computing system, a fraud prevention action to future computing actions having the respective combination of respective values of characteristics associated with the one or more clusters, (ix) receiving, by the computing system, a computing action request involving the account of the user, the computing action request comprising respective values for a plurality of computing action characteristics, wherein the respective values for the plurality of computing action characteristics of the computing action request match the combination of characteristic values for one of the one or more clusters, (x) determining, by the computing system, that the respective values for the plurality of computing action characteristics match a combination of characteristic values for one of the clusters determined in (viii), and (xi) in response to (ix) and (x), executing, by the computing system, a fraud prevention action with respect to the computing action request.
In an embodiment of the third aspect, the method further includes receiving, by the computing system, data respective of the first past computing actions, the first past computing actions for a first time period, the first time period comprising a plurality of sub-periods, and receiving, by the computing system, data respective of the present computing actions, the present computing actions for a second time period, the second time period different from the first time period, wherein determining the first respective values for the plurality of metrics includes calculating values for the plurality of metrics for each of the sub-periods to generate an initial value set, and discarding, by the computing system, for each of the metrics, a set of highest metric values in the initial value set to generate, for each of the metrics, the first respective values for the plurality of metrics.
In an embodiment of the third aspect, the plurality of metrics include one or more of a number of users, a total computing action volume, a disputed computing action volume, or a total loss.
In an embodiment of the third aspect, determining that at least one of the second metric values is an outlier with respect to the first metric values includes determining, for each of the plurality of metrics, a respective average of the first respective values for the metric, calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric, and determining that at least one of the deviations exceeds a predetermined threshold.
In an embodiment of the third aspect, the fraud prevention action includes requiring a second authentication factor of the user, or declining the computing action associated with the computing action request.
In an embodiment of the third aspect, the plurality of characteristics comprises two or more of a geographic origin of the computing action, an access channel of a computing system that initiated the computing action, a user login flow of a user that initiated the computing action, or a user activity flow immediately before the computing action.
In an embodiment of the third aspect, calculating the risk comprises determining a rate of retracted computing actions.
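Tying the third aspect together, the following sketch reuses the illustrative helpers defined in the earlier sketches (METRICS, sub_period_values, comparison_values, is_outlier, characteristic_key); the notification callback and action labels are likewise assumptions, not elements of the disclosure.

```python
# A minimal end-to-end sketch of the third aspect, reusing the illustrative
# helpers sketched above; notify_user and the action strings are assumptions.

def monitor_target_set(past_actions_by_sub_period, present_actions, notify_user):
    """(ii)-(iv): compare present metric values against the trimmed baseline."""
    for name, metric in METRICS.items():
        baseline = comparison_values(sub_period_values(metric, past_actions_by_sub_period))
        if is_outlier(metric(present_actions), baseline):
            notify_user(name)  # (iv) transmit an outlier notification to the user
            break

def screen_request(request, risky_clusters):
    """(ix)-(xi): execute a fraud prevention action on a matching request."""
    if characteristic_key(request) in risky_clusters:
        return "require_second_factor_or_decline"
    return "allow"
```

The two halves operate on different timescales: target-set monitoring runs against aggregated past and present periods, while request screening applies the batch-derived cluster rules to each individual computing action request as it arrives.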
While this disclosure has described certain embodiments, it will be understood that the claims are not intended to be limited to these embodiments except as explicitly recited in the claims. On the contrary, the instant disclosure is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the disclosure. Furthermore, in the detailed description of the present disclosure, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, it will be obvious to one of ordinary skill in the art that systems and methods consistent with this disclosure may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure various aspects of the present disclosure.
Some portions of the detailed descriptions of this disclosure have been presented in terms of procedures, logic blocks, processing, and other symbolic representations of operations on data bits within a computer or digital system memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. A procedure, logic block, process, etc., is herein, and generally, conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these physical manipulations take the form of electrical or magnetic data capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system or similar electronic computing device. For reasons of convenience, and with reference to common usage, such data is referred to as bits, values, elements, symbols, characters, terms, numbers, or the like, with reference to various presently disclosed embodiments. It should be borne in mind, however, that these terms are to be interpreted as referencing physical manipulations and quantities and are merely convenient labels that should be interpreted further in view of terms commonly used in the art. Unless specifically stated otherwise, as apparent from the discussion herein, it is understood that throughout discussions of the present embodiment,  discussions utilizing terms such as “determining” or “outputting” or “transmitting” or “recording” or “locating” or “storing” or “displaying” or “receiving” or “recognizing” or “utilizing” or “generating” or “providing” or “accessing” or “checking” or “notifying” or “delivering” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data. The data is represented as physical (electronic) quantities within the computer system’s registers and memories and is transformed into other data similarly represented as physical quantities within the computer system memories or registers, or other such information storage, transmission, or display devices as described herein or otherwise understood to one of ordinary skill in the art.

Claims (20)

  1. A computer-implemented method, the method comprising:
    (i) receiving, by a computing system, data respective of a plurality of computing actions for a time period;
    (ii) categorizing, by the computing system, each of the plurality of computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values of a plurality of characteristics of the computing actions;
    (iii) calculating, by the computing system, for each of the plurality of clusters, a respective risk for the time period;
    (iv) determining, by the computing system, for one or more clusters of the plurality of clusters, that the respective risk exceeds a threshold; and
    (v) in response to (iv) , by the computing system, automatically performing a fraud prevention action for further computing actions having the respective combination of respective values of characteristics associated with the one or more clusters;
    wherein (i) , (ii) , (iii) , and (iv) are performed periodically in a batch process, and (v) is performed continuously according to a most recent periodic performance of (i) , (ii) , (iii) , and (iv) .
  2. The method of claim 1, wherein the plurality of characteristics comprises two or more of:
    a geographic origin of the computing action;
    an access channel of a computing system that initiated the computing action;
    a user login flow of a user that initiated the computing action; or
    a user activity flow immediately before the computing action.
  3. The method of claim 1, wherein the fraud prevention action comprises one or more of:
    requiring a second authentication factor of a user in the further computing action; or
    declining the further computing action.
  4. The method of claim 1, wherein repeating (i) - (iv) periodically includes each repetition using a different respective time period in (i) .
  5. The method of claim 1, wherein calculating the respective risk for the time period comprises determining a respective rate of retracted computing actions for the time period.
  6. The method of claim 1, wherein (v) comprises:
    for a first one of the further computing actions, requiring a second authentication factor of a user in the first further computing action; and
    for a second one of the further computing actions, declining the second further computing action.
  7. A computer-implemented method of detecting an account takeover associated with computing actions, the method comprising:
    (i) receiving, by a computing system, data respective of a first plurality of computing actions for a first time period, the first time period comprising a plurality of sub-periods;
    (ii) receiving, by the computing system, data respective of a second plurality of computing actions for a second time period, the second time period different from the first time period;
    (iii) calculating, by the computing system, a plurality of metrics for each of the sub-periods to generate, for each of the metrics, a respective plurality of first metric values;
    (iv) discarding, by the computing system, for each of the metrics, a set of highest metric values to generate, for each of the metrics, a respective plurality of comparison values;
    (v) calculating, by the computing system, the plurality of metrics for the second time period to generate, for each of the metrics, a respective second metric value;
    (vi) determining that at least one of the second metric values is an outlier with respect to the comparison values, wherein the outlier determination is indicative of a computing action account takeover; and
    (vii) in response to (vi) , transmitting an account takeover notification.
  8. The method of claim 7,
    wherein (i) – (vi) are performed with respect to a target set of computing action accounts; and
    wherein (vii) comprises transmitting an outlier notification to a respective user associated with each computing action account in the set of computing action accounts.
  9. The method of claim 7, further comprising:
    defining a plurality of target sets of computing action accounts;
    wherein (i) – (vi) are performed with respect to each target set of computing action accounts; and
    wherein the plurality of metrics comprises a first plurality of metrics for a first set of the plurality of target sets of computing action accounts and a second plurality of metrics for a second set of the plurality of target sets of computing action accounts, wherein the first plurality of metrics is different from the second plurality of metrics.
  10. The method of claim 7, wherein determining that at least one of the second metric values is an outlier with respect to the comparison values comprises:
    determining, for each of the plurality of metrics, a respective average of the respective comparison values for the metric;
    calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric; and
    determining that at least one of the deviations exceeds a predetermined threshold.
  11. The method of claim 10, further comprising:
    determining, for each of the plurality of metrics, a respective standard deviation of the respective comparison values for the metric;
    normalizing, for each of the plurality of metrics, the deviation of the second metric value by the respective standard deviation of the metric to calculate a normalized deviation of the second metric value; and
    determining that at least one of the normalized deviations exceeds a predetermined threshold.
  12. The method of claim 7, wherein one or more of:
    the plurality of sub-periods are of equal duration to each other; or
    the second time period is of equal duration to at least one of the sub-periods.
  13. The method of claim 7, wherein the plurality of metrics comprise one or more of:
    a number of users;
    a total computing action volume;
    a disputed computing action volume; or
    a total loss.
  14. A computer-implemented method of preventing fraudulent computing action activity, the method comprising:
    (i) assigning, by a computing system, an account of a user to a target set of computing action accounts;
    (ii) determining, by the computing system, first respective values for a plurality of metrics for first past computing actions of the target set of computing action accounts;
    (iii) determining, by the computing system, second respective values for the plurality of metrics for present computing actions of the target set of computing action accounts;
    (iv) determining, by the computing system, for at least one of the metrics, that the second respective value is an outlier with respect to the first respective values and, in response, transmitting an outlier notification to the user;
    (v) receiving, by the computing system, data respective of second past computing actions, the second past computing actions being for a time period;
    (vi) categorizing, by the computing system, each of the second past computing actions into a respective one of a plurality of clusters, each cluster defined by a respective combination of respective values of a plurality of characteristics of the second past computing actions;
    (vii) calculating, by the computing system, for each of the plurality of clusters, a respective risk for the time period;
    (viii) determining, by the computing system, for one or more clusters of the plurality of clusters, that the respective risk exceeds a threshold and, in response, assigning, by the computing system, a fraud prevention action to future computing actions having the respective combination of respective values of characteristics associated with the one or more clusters;
    (ix) receiving, by the computing system, a computing action request involving the account of the user, the computing action request comprising respective values for a plurality of computing action characteristics, wherein the respective values for the plurality of computing action characteristics of the computing action request match the combination of characteristic values for one of the one or more clusters;
    (x) determining, by the computing system, that the respective values for the plurality of computing action characteristics match a combination of characteristic values for one of the clusters determined in (viii) ; and
    (xi) in response to (ix) and (x) , executing, by the computing system, a fraud prevention action with respect to the computing action request.
  15. The method of claim 14, further comprising:
    receiving, by the computing system, data respective of the first past computing actions, the first past computing actions for a first time period, the first time period comprising a plurality of sub-periods; and
    receiving, by the computing system, data respective of the present computing actions, the present computing actions for a second time period, the second time period different from the first time period;
    wherein determining the first respective values for the plurality of metrics comprises:
    calculating values for the plurality of metrics for each of the sub-periods to generate an initial value set; and
    discarding, by the computing system, for each of the metrics, a set of highest metric values in the initial set to generate, for each of the metrics, the first respective values for the plurality of metrics.
  16. The method of claim 14, wherein the plurality of metrics comprise one or more of:
    a number of users;
    a total computing action volume;
    a disputed computing action volume; or
    a total loss.
  17. The method of claim 14, wherein determining that at least one of the second metric values is an outlier with respect to the first metric values comprises:
    determining, for each of the plurality of metrics, a respective average of the first respective values for the metric;
    calculating, for each of the plurality of metrics, a deviation of the second metric value from the average of the metric; and
    determining that at least one of the deviations exceeds a predetermined threshold.
  18. The method of claim 14, wherein the fraud prevention action comprises:
    requiring a second authentication factor of the user; or
    declining the computing action associated with the computing action request.
  19. The method of claim 14, wherein the plurality of characteristics comprises two or more of:
    a geographic origin of the computing action;
    an access channel of a computing system that initiated the computing action;
    a user login flow of a user that initiated the computing action; or
    a user activity flow immediately before the computing action.
  20. The method of claim 14, wherein calculating the risk comprises determining a rate of retracted computing actions.