WO2024226361A1 - Alert fusion for extended detection and response to security anomalies - Google Patents
- Publication number
- WO2024226361A1 (PCT/US2024/024887)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- analyst
- multiple different
- work units
- anomalies
- grouping
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
- H04L63/1425—Traffic logging, e.g. anomaly detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1408—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/14—Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
- H04L63/1433—Vulnerability analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L63/00—Network architectures or network communication protocols for network security
- H04L63/20—Network architectures or network communication protocols for network security for managing network security; network security policies in general
Definitions
- the present disclosure relates generally to computer and network security, and to threat detection, analysis, and alerts in particular.
- FIG. 1 illustrates an example overview of techniques according to this disclosure, including anomaly detection in a network, anomaly data enhancement, multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- FIG. 2 illustrates example anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
- FIG. 3 illustrates a detailed example of the anomaly data enhancement introduced in FIG. 2, in accordance with various aspects of the technologies disclosed herein.
- FIG. 4 illustrates example multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- FIG. 5 illustrates example analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
- FIG. 6 illustrates an example computer hardware architecture that can implement the techniques disclosed herein, in accordance with various aspects of the technologies disclosed herein.
- FIG. 7 is a flow diagram that illustrates an example method performed by a computing device in connection with anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
- FIG. 8 is a flow diagram that illustrates an example method performed by a computing device in connection with multi-stage grouping to generate analyst work units, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- FIG. 9 is a flow diagram that illustrates an example method performed by a computing device in connection with analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
- This disclosure describes techniques that can be performed in connection with extended detection and response to security anomalies in computing networks. Any one of the disclosed techniques, or any group of the disclosed techniques, can optionally be implemented via computing devices that provide automated processing of security-related events in a computing network, such as a network owned by a company, university, or government agency. In general, processing of security-related events can result in information that is presented to a security response team, e.g., a team of human analysts, for further analysis and resolution.
- one or more methods can be performed by a computing device, e.g., a server device coupled to a network.
- the network can comprise, e.g., multiple different domains and multiple different computing assets.
- the different computing assets may be associated with different asset criticality values.
- Example methods can optionally include detecting anomalies in the network.
- anomalies can be detected using third-party anomaly detection systems. Different anomalies may be detected with different confidence values.
- Anomaly detection can optionally be performed by multiple different anomaly detection systems that may be dedicated to different network domains, geographical zones, or computing asset types.
- Detected anomaly data can be enhanced using the anomaly data enhancement techniques described herein.
- Anomaly data enhancement can include receiving security event information comprising at least one attribute associated with an anomaly detected in a network.
- the security event information can be provided as an input to a neural network-based processor.
- the neural network-based processor can identify at least one representative attribute based on the input.
- the representative attribute can be determined by the neural network-based processor to represent the anomaly for security analyses of instances of the anomaly.
- a template comprising the representative attribute may be generated, and the template can be deployed to a production environment.
- the production environment can be configured to automatically detect the instances of the anomaly in the network, and the production environment can be configured to use the template to define at least one collected attribute that is collected for the security analyses of the instances of the anomaly.
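- The following Python sketch illustrates one possible shape for such a template and its deployment; the names (AttributeTemplate, build_template, identify_attributes, production_env.register) are hypothetical stand-ins for the neural network-based processor and production environment described above, not an implementation prescribed by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class AttributeTemplate:
    """Names the attributes to collect for instances of one anomaly type."""
    anomaly_type: str
    representative_attributes: list[str]  # e.g. ["registry", "process_name"]
    summary_text: str                     # natural-language event description

def build_template(event: dict, identify_attributes) -> AttributeTemplate:
    # identify_attributes stands in for the neural network-based processor;
    # it returns the attribute names judged useful for security analyses.
    chosen = identify_attributes(event)
    return AttributeTemplate(
        anomaly_type=event["type"],
        representative_attributes=chosen,
        summary_text=event.get("description", ""),
    )

def deploy(template: AttributeTemplate, production_env) -> None:
    # The production environment consults the template to decide which
    # attributes to collect for later instances of the same anomaly type.
    production_env.register(template.anomaly_type, template)
```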
- the anomalies, optionally represented by enhanced anomaly data, can be grouped according to a multi-stage grouping process to generate analyst work units. In an example first stage of the multi-stage grouping process, anomalies can be analyzed, based on threat intelligence information, in order to group the anomalies into multiple different threat occurrence groups.
- Each threat occurrence group can therefore comprise one or more of the anomalies.
- the multiple different threat occurrence groups can be grouped into multiple different analyst work units. Therefore, each analyst work unit can comprise one or more of the threat occurrence groups.
- the analyst work units can optionally be enhanced by analyst work unit data enhancement techniques described herein.
- Methods to enhance analyst work units can include receiving an analyst work unit, the analyst work unit comprising one or more threat occurrence groups, and each of the one or more threat occurrence groups comprising one or more detected anomalies detected in a network comprising multiple different computing assets.
- the methods can furthermore comprise identifying, within a data store comprising computing threat information, at least one similar threat that has higher similarity to the analyst work unit than one or more other threats identified in the data store. Identifying the at least one similar threat can comprise, e.g., performing a nearest neighbor search on the data store.
- the methods can include generating an analyst summary of the analyst work unit. Generating the analyst summary can comprise, e.g., using a neural network-based generator to process the analyst work unit and the at least one similar threat.
- the multiple different analyst work units can be prioritized.
- the multiple different analyst work units can be prioritized based on respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units.
- the multiple different analyst work units can furthermore be prioritized based on respective confidence values of respective anomalies included in respective analyst work units.
- a prioritized display of the multiple different analyst work units can be provided, e.g., for analyst review.
- One or more analyst interactions can be received via the prioritized display of the multiple different analyst work units, resulting in analyst interaction data.
- Embodiments can store the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
- the techniques described herein may be performed by one or more computing devices comprising one or more processors and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the methods disclosed herein.
- the techniques described herein may also be accomplished using non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, perform the methods described herein.
- Embodiments of this disclosure can address alert fatigue while also increasing the efficiency and effectiveness of analysts tasked with analyzing and responding to security events in a network.
- a group of complementary systems and techniques is disclosed. In some embodiments, the disclosed techniques can optionally be applied together to provide analysts with high-quality, synthesized information that allows them to quickly understand, research, and act in response to security threats from multiple different telemetry sources.
- any one of the disclosed techniques, or any sub-group of the disclosed techniques can optionally be provided in a freestanding approach that need not necessarily also include other techniques disclosed herein.
- the techniques according to this disclosure fall into three categories: First, given anomalies detected in a network, optionally via multiple different telemetry sources, techniques disclosed herein can include anomaly data enhancement configured to discover representative attributes of the anomalies.
- the representative attributes can include attributes determined to be useful in anomaly analysis and resolution.
- Second, multi-stage grouping and prioritization methods can be performed to generate analyst work units that can be presented to analysts in a prioritized manner, such as a list of analyst work units arranged in descending priority. Active learning can be applied to security team interactions with the analyst work units in order to adjust and customize the multi-stage grouping and prioritization methods over time.
- Third, techniques disclosed herein can include analyst work unit data enhancement, which can generate analyst summaries of analyst work units.
- the analyst summaries can support faster analysis and response times to threats associated with the analyst work units.
- XDR: extended detection and response
- IPS: intrusion prevention systems
- XDR as applied herein can thus collect and process security-related anomalies from more than one type of telemetry source, thereby extending endpoint detection and response by considering, e.g., network and email and/or other telemetry sources/modalities.
- Collected security events can be combined into a unified feed, providing analysts with a comprehensive overview of anomalies in the monitored environment, e.g., in the network.
- Example anomaly data enhancement techniques disclosed herein can utilize a neural network-based processor, such as a natural language processor (NLP) or a large language model (LLM).
- Example neural network-based processors include the generative pre-training transformer, version three (GPT3), the generative pre-training transformer, version four (GPT4), and others.
- example anomaly data enhancement techniques can process security events / anomalies independently.
- the neural network-based processor can process a security event and its description and can incorporate one or more representative attributes into an event description.
- the result can be encoded into a generated template with the representative attributes selected by the neural network-based processor.
- example anomaly data enhancement techniques can repeat operations across multiple event / anomaly samples. Resulting templates and chosen representative attributes can be compared. If the templates and representative attributes satisfy a predefined consistency check, then a selected consistent template and its representative attributes can be associated with the security event / anomaly. The selected template and representative attributes can be deployed into a production environment.
- a new template can be generated each time a new event type / anomaly is introduced. Since the underlying cybersecurity engines may change over time, repetitive consistency checks can be performed.
- events / anomalies produced by cybersecurity engines may change in two main ways: First, the structure or the schema of an event's attributes may be upgraded, e.g., by adding new attributes or deprecating previously available attributes. Second, the engines that produce the event may change, either by external factors that influence the statistical properties of events (e.g., by emergence of new malware strains) or by internal changes in the engines such as may be caused by bug fixes, parameter tuning, or other kinds of maintenance.
- Example methods can be adapted to detect these changes, followed by the regeneration of templates.
- Embodiments can be configured to detect changes in any of several ways.
- First, a change of event / anomaly attribute schemas may be incompatible with previously generated templates, e.g., if an attribute has been renamed. These changes may be detected by storing historical versions of schemas and comparing them to new event / anomaly schemas.
- Second, internal engine changes may be detected by periodic regeneration of templates and comparing them to previous versions. If a previous template is sufficiently different from an updated template, then a revision of the template can be prompted.
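- A minimal sketch of both change-detection approaches, assuming an event is represented as a flat attribute dictionary and leaving the template-similarity function abstract:

```python
def schema_changed(stored_schema: set[str], event: dict) -> bool:
    """Detect schema drift by comparing a stored historical schema to the
    attribute names of a newly observed event; any added, removed, or
    renamed attribute appears as a set difference."""
    return set(event.keys()) != stored_schema

def needs_revision(old_template: str, new_template: str,
                   similarity, threshold: float = 0.9) -> bool:
    """Detect internal engine drift by periodically regenerating a template
    and comparing it to the previous version; `similarity` is an assumed
    scoring function returning a value in [0, 1]."""
    return similarity(old_template, new_template) < threshold
```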
- the disclosed anomaly data enhancement techniques need not require event attribute normalization, i.e., an event schema need not adhere to a predefined structure. Furthermore, the disclosed anomaly data enhancement techniques can leverage common cyber security domain knowledge in order to generate event summaries.
- the disclosed anomaly data enhancement techniques can generate templates in a natural language, as opposed to a structured list of attributes.
- further techniques can comprise multi-stage grouping of anomalies from different security products and telemetry sources, as well as prioritization and presentation of resulting groups to the analyst.
- Example techniques can employ multi-stage grouping and prioritization methods, described below.
- Multi-stage grouping and prioritization methods can take into account available threat intelligence to measure the potential damage caused by a given threat, together with the confidence of the underlying detection engine. For example, multi-stage grouping and prioritization methods can consider a network's asset inventory information and can measure the relative business value of assets in the network environment, the roles of different asset types (e.g., servers, laptops, phones, medical devices, etc.), and the potential impact of asset compromise.
- multi-stage grouping and prioritization methods can consider additional aspects like geography, threat type, or remedy action type for smart grouping. Considering a set of selectable dimensions (e.g., threat severity, threat occurrence confidence, asset value), the multi-stage grouping and prioritization methods can proceed with clustering detected anomalies in a multi-dimensional space, thereby generating fused analyst work units, as sketched below. Such clustering can optionally reduce the number of alerts by 90% or more, and techniques can optionally be configurable to target a desired degree of alert reduction specified by an analyst or security response team.
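- As an illustrative sketch only, the clustering step could be approximated with an off-the-shelf algorithm such as agglomerative clustering; the disclosure contemplates, e.g., modularity or spectral clustering, and scikit-learn is used here purely for concreteness:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# One row per detected anomaly in the selected dimensions, scaled to [0, 1]:
# [threat_severity, threat_occurrence_confidence, asset_value]
features = np.array([
    [0.90, 0.80, 1.00],
    [0.85, 0.75, 1.00],
    [0.20, 0.60, 0.30],
    [0.25, 0.55, 0.35],
])

# distance_threshold controls how aggressively anomalies fuse together,
# which is one knob for targeting a desired degree of alert reduction.
clustering = AgglomerativeClustering(n_clusters=None, distance_threshold=0.3)
labels = clustering.fit_predict(features)  # e.g. [0, 0, 1, 1]: four alerts -> two groups
```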
- An analyst work unit generated according to the disclosed multi-stage grouping and prioritization methods can represent an intuitive unit of work for the security analyst to investigate and remediate.
- Analyst work units can optionally be presented to analysts as a single prioritized list, which can be prioritized according to analyst-selected dimensions, and which can present analyst work units in, e.g., descending order according to their associated priority, so that the analyst or security response team can conduct timely and adequate incident responses.
- security analysts can provide feedback for analyst work units, which can be used in an active learning loop. Based on the analysts' feedback, the multi-stage grouping and prioritization methods disclosed herein can adapt to accommodate, e.g., network-specific sensitivity to particular threats, or to the relative values of different assets. Multi-stage grouping and prioritization methods can thereby adapt to track a set of evolving network security policies expressing the acceptable security risk levels of a particular network.
- the analyst feedback can be applied to customize both multi-stage grouping as well as prioritization, so that over time anomalies can be grouped in more granular or less granular groupings, and resulting analyst work units can be prioritized differently, based on analyst feedback and preferences.
- Security policies capturing learned rules can optionally be inferred automatically.
- adaptive learning can allow multi-stage grouping and prioritization methods to learn default settings for various industries, so that new customers can be provided with industry-specific security settings that may require a lower degree of analyst feedback and adjustments.
- Example multi-stage grouping and prioritization methods can use any of a variety of different variables to group anomalies. Furthermore, any of the variety of different variables can be used to prioritize analyst work units.
- Example variables that can be used for grouping and/or prioritization include, without limitation, threat severity, confidence of threat detection, threat type, asset value, remedy action type, and geographic location.
- grouping based on one or more of the variables can be mandatory, while grouping based on one or more other variables can be optional.
- grouping based on threat severity and confidence of threat detection can be mandatory, while grouping based on threat type, asset value, remedy action type, and geographic location can be optional.
- Optional variables used for grouping can be, e.g., selected and deselected by a security team.
- optional fields such as asset value, geography or even remedy action type can be inferred from the provided mandatory fields if not directly provided.
- Example grouping methods include any unsupervised clustering or community selection method known in the art or as may be developed.
- Example clustering methods include, e.g., modularity clustering methods and/or spectral clustering methods.
- Further example grouping methods can be based on existing industry standard definitions, when available, such as by using MITRE types for grouping based on threat severity, using predefined external data sources such as StealthWatch Host Groups for grouping by threat type, and/or using asset management process (AMP) groups to imply asset values, in order to group by asset value.
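- A minimal sketch of taxonomy-based grouping, using a hypothetical mapping from MITRE ATT&CK technique identifiers to severity labels; a real deployment would populate such tables from the external data sources named above:

```python
# Hypothetical severity lookup keyed by MITRE ATT&CK technique ID.
MITRE_SEVERITY = {
    "T1486": "critical",  # Data Encrypted for Impact (e.g., ransomware)
    "T1046": "medium",    # Network Service Discovery
}

def severity_group(anomaly: dict) -> str:
    """Group an anomaly by threat severity via an industry-standard taxonomy."""
    return MITRE_SEVERITY.get(anomaly.get("mitre_technique", ""), "unknown")
```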
- user feedback embedding processes can be deployed to capture user changes in groupings output by the multi-stage grouping process.
- Analyst interaction data can be utilized for future grouping operations.
- grouping based on analyst interaction data can override external definitions and can influence results of future clustering runs.
- reinforcement learning from human feedback (RLHF) technology can be applied to capture and use analyst interaction data for modification of the multi-stage grouping process.
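- One simplified way to capture such interaction data is as pairwise must-link / cannot-link constraints that later clustering runs can honor; the sketch below is an illustrative feedback store, not the RLHF pipeline itself:

```python
class FeedbackStore:
    """Records analyst regrouping actions as pairwise constraints that
    future grouping runs can consult."""
    def __init__(self) -> None:
        self.must_link: set[frozenset] = set()
        self.cannot_link: set[frozenset] = set()

    def record_merge(self, anomaly_a: str, anomaly_b: str) -> None:
        # The analyst merged two groups: keep these anomalies together.
        self.must_link.add(frozenset((anomaly_a, anomaly_b)))

    def record_split(self, anomaly_a: str, anomaly_b: str) -> None:
        # The analyst split a group: keep these anomalies apart.
        self.cannot_link.add(frozenset((anomaly_a, anomaly_b)))
```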
- Multi-stage grouping and prioritization processes can be adapted to minimize analyst incident response efforts by adaptive prioritization, adaptive grouping, explanation, and prioritization of generated analyst work units depending on, e.g., a type of threat, a number of affected network assets, a current state of a network, and/or known or estimated asset values of affected assets. Multi-stage grouping and prioritization processes can thereby reduce overall analyst workload and alert fatigue.
- analyst work unit data enhancement can apply analyst work unit data enhancement techniques to generate analyst summaries based on analyst work units, e.g., based on the analyst work units that are output from a multi-stage grouping process such as described above.
- Example processes can use analyst work units as inputs and can generate textual summaries of the analyst work units, assess risks associated with the analyst work units, and propose analyst response actions for responding to the analyst work units.
- the analyst work unit data enhancement techniques can assist analysts to determine a proper risk/priority of an analyst work unit and suggest the next steps, thereby speeding up analyst response times.
- Analyst work unit data enhancement techniques can be adapted to use a threat intelligence data store, which can be internal, e.g., owned and operated internally by a company or other organization, or external, e.g., owned and operated by a third party.
- generating analyst summaries of the analyst work unit can be performed by a server coupled to a local area network, and the local area network can further comprise, or be coupled to, the internal threat intelligence data store.
- When an external threat intelligence data store is used, generating analyst summaries of the analyst work unit can be performed by the server coupled to a local area network; however, the server may connect to an external network (other than the local area network) which comprises the external threat intelligence data store.
- Example threat intelligence data stores are the TALOS and MITRE threat intelligence data stores, which include threat / malware taxonomies and optionally further include response playbooks for responding to threats.
- Analyst work unit data enhancement techniques can be configured to use an analyst work unit, or one or more associated anomalies, as an input, and can perform a nearest neighbor search of the threat intelligence data store to find similar known threat / malware families, e.g., finding a most similar known threat / malware family. Similarity can be according to any desired comparison information, e.g., threat assessment information, security event / anomaly information, etc.
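- A minimal sketch of the nearest neighbor search, using scikit-learn with a toy numeric embedding standing in for whatever learned threat representation an embodiment might use:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(record: dict) -> np.ndarray:
    # Toy embedding over a few numeric features; a real system would embed
    # threat assessment and security event / anomaly information instead.
    return np.array([record.get("severity", 0.0),
                     record.get("confidence", 0.0),
                     record.get("asset_value", 0.0)])

known_threats = [
    {"name": "qakbot", "severity": 0.9, "confidence": 0.8, "asset_value": 0.7},
    {"name": "adware_x", "severity": 0.3, "confidence": 0.9, "asset_value": 0.2},
]
index = NearestNeighbors(n_neighbors=1).fit(
    np.stack([embed(t) for t in known_threats]))

work_unit = {"severity": 0.85, "confidence": 0.75, "asset_value": 0.7}
_, idx = index.kneighbors(embed(work_unit).reshape(1, -1))
similar_threat = known_threats[idx[0][0]]["name"]  # -> "qakbot"
```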
- analyst work unit data enhancement techniques herein can be configured to generate inputs for a neural network-based processor, e.g., an attention-based or other NLP or LLM neural network, to thereby instruct the neural network-based processor to create an analyst summary of the input analyst work unit based on threat intelligence information associated with the one or more similar known threats.
- the analyst summary can include, inter alia, a measurement of potential risk of the input analyst work unit based on the one or more similar known threats.
- the analyst summary can furthermore include suggested next actions according to existing playbooks or records of former investigations performed in connection with the one or more similar known threats.
- Example analyst summaries output by analyst work unit data enhancement techniques herein can include a high-level overview of the threat posed by the analyst work unit. For example, "This alert resembles a malware dropper. After infecting a device such malware downloads and installs additional modules. Potential risk ranges from malicious advertisement to exfiltration and/or data destruction."
- Another example high-level overview may comprise, "This alert resembles a new strain of Qakbot. Qakbot is known for..."
- Another example high-level overview may comprise, “According to the techniques used, the alert resembles the work of the known advanced persistent threat (APT) group [insert group name]. Based on their previous breaches, we suggest...”
- APT advanced persistent threat
- analyst work unit data enhancement techniques herein can leverage the capabilities of LLMs such as large language model meta-AI (LLaMA), GPT-4, and successors to simplify analyst summary generation.
- An LLM-based processor can use existing threat intelligence to fine-tune information associated with a similar threat.
- the LLM-based processor can furthermore use the similar threat to create the analyst summary of a given analyst work unit as well as suggested response actions.
- the LLM-based processor can furthermore assess the potential risk associated with an analyst work unit based on a similar threat, e.g., based on similar malware families and/or known breaches.
- an input generator that is adapted to generate an input for an LLM-based processor may generate the following natural language prompt:
- Event 1 text: You are sending traffic to the Tor network
- Event 2 data "commandLineArguments”: "sudo spctl -master-disable"
- the LLM-based processor may generate the following example analyst summary output:
- a high-risk security incident was detected on the network, involving the use of Tor for anonymizing traffic and a command to disable security tools. Immediate action is required to mitigate the potential threat.
- Tor Network Utilization: The security system detected traffic being sent to the Tor network, which is known for providing anonymity to users. The use of Tor may indicate attempts to hide malicious activities or exfiltrate sensitive data from the organization.
- Analyst work unit data enhancement techniques herein can be adapted to automatically generate analyst summaries, which can significantly simplify the investigation process, lower the required expertise of the cybersecurity analyst, and reduce time to react.
- the neural network-based processor can be employed within a controlled environment, e.g., a secure server that does not have a connection to public networks.
- Analyst work unit data enhancement techniques herein can solve a problem of evolving underlying security products, because the techniques will organically update along with updates to threat intelligence data stores and neural network-based processors such as LLM and NLP based models.
- FIG. 1 illustrates an example overview of techniques according to this disclosure, including anomaly detection in a network, anomaly data enhancement, multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and adaptive learning based on security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- FIG. 1 includes an example network 100 which can be monitored by various anomaly detection systems 131A, 131B, 131C.
- the network 100 can comprise multiple different domains 110, 120, and multiple different computing assets.
- the domain 110 can comprise assets 111 and 112
- the domain 120 can comprise assets 121 and 122.
- Example domains can include, e.g., a file system / storage domain, an email system domain, a security system / firewall domain, and various network / network equipment domains.
- Example assets can include, e.g., servers, laptops, user equipment (UEs), routers, firewalls, internet of things (IoT) devices, etc.
- Different computing assets can have different asset criticality values. For example, an asset criticality of a server that stores or processes a large volume of sensitive company data may be much higher than an asset criticality of an employee UE, such as a smartphone, that stores mainly the employee's personal information. Furthermore, different computing assets can be located at different geographic locations. For example, the network 100 can comprise computing assets in multiple different cities, regions, or countries.
- the anomaly detection systems 131A, 131B, 131C can comprise, e.g., security monitoring systems that are configured to detect threats, security events, and other anomalies via various different telemetry sources, e.g., via at least two different telemetry sources, within the network 100.
- a wide variety of different anomaly detection systems are commercially available, and owners of advanced networks such as the network 100 may employ multiple anomaly detection systems 131A, 131B, 131C to alert their security response teams to potential threats to their network 100.
- the anomaly detection systems 131A, 131B, 131C can output anomaly data, e.g., alerts for further investigation by a security response team.
- Anomalies output from the anomaly detection systems 131 A, 131B, 131C can be processed according to the techniques described herein, referred to generally as alert fusion 130.
- FIG. 1 illustrates different processing techniques that can be applied, namely, anomaly data enhancement 132, multi-stage grouping 133, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136.
- the adaptive learning based on security team interactions 136 can be configured to provide feedback 137 to multi-stage grouping 133 and/or to analyst work unit prioritization and presentation 135, in order to adaptively update the multi-stage grouping 133 and/or the analyst work unit prioritization and presentation 135. Furthermore, in some embodiments, the multi-stage grouping 133 and/or the analyst work unit prioritization and presentation 135 can be directly modified by analyst / security team inputs and selections.
- Anomaly data enhancement 132 is described further in connection with FIG. 2 and FIG. 3, as well as the flowchart provided in FIG. 7.
- Multi-stage grouping 133 to generate analyst work units, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136 are described further in connection with FIG. 4, as well as the flowchart provided in FIG. 8.
- Analyst work unit data enhancement 134 is described further in connection with FIG. 5, as well as the flowchart provided in FIG. 9.
- An example server that can be configured to perform any of the functions illustrated in FIG. 1 is illustrated in FIG. 6. Such a server can optionally be included within the network 100, and anomaly data enhancement 132, multi-stage grouping 133 to generate analyst work units, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136 can likewise be performed within the network 100.
- FIG. 2 illustrates example anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
- the illustrated anomaly data enhancement architecture 200 can be used to implement the anomaly data enhancement 132 introduced in FIG. 1 in some embodiments.
- the anomaly data enhancement architecture 200 includes anomaly data 202, neural network 204, representative attribute(s) template 206, consistency check 208, production environment 210, and anomaly data 212.
- a training / template generation stage can be initiated by, e.g., detecting a new anomaly type, or a change in a schema of an existing anomaly type.
- the anomaly data 202 can be provided as an input to the neural network 204, and the neural network can generate the representative attribute(s) template 206 based on the anomaly data 202.
- the anomaly data 202 can represent an anomaly having an anomaly type and output by an anomaly detection system 131A.
- the representative attribute(s) template 206 can comprise attributes determined by the neural network 204 to be useful for analyzing and resolving the anomaly represented by the anomaly data 202, based on threat intelligence information and/or prior anomaly investigations of similar anomalies.
- the template generation process can be repeated according to multiple repetition cycles at the training / template generation stage, using different first instances of the anomaly data 202 as inputs to the neural network 204, and resulting in multiple instances of the representative attribute(s) template 206.
- the multiple instances of the representative attribute(s) template 206 can then be compared at consistency check 208.
- the consistency check 208 can check whether the multiple instances of the representative attribute(s) template 206 satisfy a consistency threshold. For example, using a consistency threshold of 75%, the consistency check 208 can determine whether the multiple instances of the representative attribute(s) template 206 are at least 75% consistent with one another.
- consistency thresholds can be between 70% and 99%.
- the training / template generation stage can continue, optionally re-using one or more instances of anomaly data 202, and the repeating cycles can result in training the neural network 204, thereby increasing the consistency of the multiple instances of the representative attribute(s) template 206.
- Additional consistency checks can be performed on resulting instances of the representative attribute(s) template 206, and template generation can be repeated until the consistency check 208 is passed.
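- A minimal sketch of the consistency check, using pairwise string similarity from the standard library as a stand-in for whatever template comparison a given embodiment employs:

```python
from difflib import SequenceMatcher
from itertools import combinations

def templates_consistent(templates: list[str], threshold: float = 0.75) -> bool:
    """Pass only if every pair of independently generated templates is at
    least `threshold` similar (75% here, matching the example above)."""
    return all(SequenceMatcher(None, a, b).ratio() >= threshold
               for a, b in combinations(templates, 2))
```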
- a consistent template from among the instances of the representative attribute(s) template 206 can be provided to the production environment 210.
- the production environment 210 can comprise, e.g., a security system that monitors a network 100 in order to gather information regarding anomalies detected by anomaly detection systems such as 131A, 131B and 131C.
- the production environment 210 can begin using the representative attribute(s) template 206 to collect the representative attribute(s) designated by the representative attribute(s) template 206. For example, for each further/second instance within the network 100 of the anomaly associated with the representative attribute(s) template 206, the production environment 210 can collect the representative attribute(s) designated by the representative attribute(s) template 206.
- the production environment 210 can store or output the resulting anomaly data 212 for use by multiple systems, e.g., for further training of the neural network 204 and/or for further processing by multi-stage grouping 133, as described herein.
- the anomaly data 212 can be provided to the neural network 204 for further training of the neural network 204.
- the production environment 210 can output the anomaly data 212, and the neural network 204 can process the anomaly data 212, in order to further increase consistency of subsequent training / template generation stages.
- FIG. 3 illustrates a detailed example of the anomaly data enhancement introduced in FIG. 2, in accordance with various aspects of the technologies disclosed herein.
- example anomaly data 302 represents an instance of the anomaly data 202 introduced in FIG. 2
- example neural network 304 represents an instance of the neural network 204 introduced in FIG. 2
- example representative attribute(s) template 306 represents an instance of the representative attribute(s) template 206 introduced in FIG. 2
- example anomaly data 312 represents an instance of the anomaly data 212 introduced in FIG. 2.
- the anomaly data 302 comprises the detected security event, "Modified Windows Defender Real-Time Protection Settings."
- anomaly data 312 that is or has been used to train the neural network 304 includes:
- the neural network 304 can generate the representative attribute(s) template 306 based on the anomaly data 302 and the anomaly data 312.
- the neural network 304 can select representative attributes from the anomaly data 302 and the anomaly data 312 to include in the representative attribute(s) template 306.
- the neural network 304 has identified the registry information from anomaly data 312 as a representative attribute for analyzing and resolving the anomaly associated with anomaly data 302.
- the neural network 304 has configured the representative attribute(s) template 306 to include a notification, "The System Has Detected a Modification In The Windows Defender Real-Time Protection Settings via the '{registry}' Key."
- the neural network 304 has configured the representative attribute(s) template 306 to furthermore include the identified representative attributes, “registry”:
- FIG. 4 illustrates example multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- FIG. 4 includes anomalies 410, multi-stage grouping 420, first stage input(s) 435, second stage input(s) 445, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 450, prioritization inputs 455, and analyst interactions 456.
- the anomalies 410 include example anomalies 411, 412, 413, 414, and 415...
- the anomalies 410 represent anomalies that can be output from the anomaly detection systems 131A, 131B, 131C introduced in FIG. 1.
- the anomalies 410 can therefore include different anomalies detected via two or more different telemetry sources.
- the anomalies 410 can be received as inputs at the multi-stage grouping 420.
- the multi-stage grouping 420 can implement multi-stage grouping 133, introduced in FIG. 1, in some embodiments.
- the multi-stage grouping 420 comprises first stage 430, second stage 440, and adaptive learning 447.
- the first stage 430 includes example first stage groupings, including threat occurrence group 431, threat occurrence group 432, and threat occurrence group 433...
- the threat occurrence group 431 can include, e.g., a group comprising anomaly 411 and anomaly 412, which resulted from grouping/clustering operations performed by the first stage 430.
- the threat occurrence group 432 can include, e.g., a group comprising anomaly 413, which resulted from grouping/clustering operations performed by the first stage 430.
- the threat occurrence group 433 can include, e.g., a group comprising anomaly 414 and 415, which resulted from grouping/clustering operations performed by the first stage 430.
- the threat occurrence groups 431, 432, 433... can be generated by the first stage 430 based on data associated with the anomalies 410, first stage inputs 435, and/or adaptive learning inputs from adaptive learning 447.
- the first stage inputs 435 can include, e.g., threat intelligence records from a threat intelligence database.
- the first stage 430 can be configured to identify, based on the anomalies 410 and the threat intelligence records, which of the anomalies 410 are related by being associated with a common threat occurrence.
- the first stage 430 can then create a threat occurrence group, e.g., threat occurrence group 431, corresponding to the common threat occurrence, and the first stage 430 can link the related anomalies, e.g., the anomalies 411, 412, to the threat occurrence group 431.
- the second stage 440 includes example second stage groupings, including analyst work unit 441, and analyst work unit 442...
- the analyst work unit 441 can include, e.g., a group comprising threat occurrence group 431 and threat occurrence group 432, which resulted from grouping/clustering operations performed by the second stage 440.
- the analyst work unit 442 can include, e.g., a group comprising threat occurrence group 433, which resulted from grouping/clustering operations performed by the second stage 440.
- the analyst work units 441, 442... can be generated by the second stage 440 based on data associated with the threat occurrence groups 431, 432, 433..., the respective anomalies 410 included in the respective threat occurrence groups 431, 432, 433, the second stage inputs 445, and/or adaptive learning inputs from adaptive learning 447.
- the second stage inputs 445 can include, e.g., an asset inventory comprising asset information for different assets 111, 112, 121, 122 in a network 100.
- the asset inventory can comprise information such as asset type, asset value, asset geographic location, and/or asset criticality.
- the second stage 440 can be configured to identify, based on the threat occurrence groups 431, 432, 433 and the asset inventory, which of the threat occurrence groups 431, 432, 433 are related by being associated with a common group of assets.
- the second stage 440 can then create an analyst work unit, e.g., analyst work unit 441, corresponding to the common group of assets, and the second stage 440 can link the related threat occurrence groups, e.g., the threat occurrence groups 431, 432, to the analyst work unit 441.
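- A simplified sketch of the two grouping stages, keying the first stage on a threat identifier supplied by threat intelligence and the second stage on the set of affected assets; real embodiments can instead use the clustering methods described earlier:

```python
from collections import defaultdict

def stage_one(anomalies: list[dict]) -> dict:
    """Stage 1: link anomalies that threat intelligence attributes to a
    common threat occurrence (keyed here by a threat identifier)."""
    groups = defaultdict(list)
    for anomaly in anomalies:
        groups[anomaly["threat_id"]].append(anomaly)
    return groups

def stage_two(occurrence_groups: dict) -> dict:
    """Stage 2: fuse threat occurrence groups that touch a common group
    of assets into a single analyst work unit."""
    work_units = defaultdict(list)
    for threat_id, members in occurrence_groups.items():
        asset_key = frozenset(a["asset"] for a in members)
        work_units[asset_key].append((threat_id, members))
    return work_units
```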
- the analyst work unit data enhancement 134 is introduced in FIG. 1.
- the analyst work unit data enhancement 134 can be configured according to FIG. 5.
- the analyst work unit data enhancement 134 can be configured to generate analyst summaries of analyst work units such as 441, 442 output from the multi-stage grouping 420.
- the resulting analyst summaries 451 (for analyst work unit 441) and 452 (for analyst work unit 442) can be supplied to the analyst work unit data prioritization and presentation 450.
- Analyst work unit data prioritization and presentation 450 can prioritize and present the analyst summaries 451, 452 output from the analyst work unit data enhancement 134.
- the analyst work unit data prioritization and presentation 450 can prioritize and present the analyst work units such as 441, 442 output from the multi-stage grouping 420.
- the analyst summaries 451, 452 or the analyst work units 441, 442 can be prioritized according to prioritization inputs 455.
- the prioritization inputs 455 can include, e.g., asset criticality values of assets affected by any analyst work unit, or confidence values associated with anomalies included in an analyst work unit, or any other data reflecting the level of risk or time urgency associated with an analyst work unit.
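- A minimal sketch of one possible priority score combining asset criticality and detection confidence; the weights are illustrative, not prescribed by this disclosure:

```python
def priority(work_unit: dict) -> float:
    """Score a work unit by the highest criticality among its affected
    assets and the highest detection confidence among its anomalies."""
    asset_term = max(a["criticality"] for a in work_unit["assets"])
    confidence_term = max(an["confidence"] for an in work_unit["anomalies"])
    return 0.6 * asset_term + 0.4 * confidence_term

work_units = [
    {"assets": [{"criticality": 0.9}], "anomalies": [{"confidence": 0.70}]},
    {"assets": [{"criticality": 0.4}], "anomalies": [{"confidence": 0.95}]},
]
ranked = sorted(work_units, key=priority, reverse=True)  # descending priority
```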
- the analyst summaries 451, 452 or the analyst work units 441, 442 can be presented to analysts, e.g., via a user interface (UI), and the analyst work unit data prioritization and presentation 450 can be configured to receive analyst interactions 456 via the UI.
- analyst work unit data prioritization and presentation 450 can be configured to adaptively learn, from analyst interactions 456, additional prioritization data that can be used to prioritize subsequent analyst summaries 451, 452 or the analyst work units 441, 442.
- the analyst work unit data prioritization and presentation 450 can be configured to supply analyst interaction data 460 to adaptive learning 447, and the adaptive learning 447 can generate inputs for use by first stage 430 and/or second stage 440 in connection with grouping operations.
- FIG. 5 illustrates example analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
- FIG. 5 includes analyst work unit 441, analyst work unit data enhancement 500, analyst summary 452, and threat intelligence data store 540.
- the analyst work unit 441 can comprise, e.g., the analyst work unit 441 output from the multi-stage grouping 420 as illustrated in FIG. 4.
- the example analyst work unit data enhancement 500 illustrated in FIG. 5 can implement the analyst work unit data enhancement 134 introduced in FIG. 1.
- the analyst summary 452 can comprise, e.g., the analyst summary 452 illustrated in FIG. 4.
- the threat intelligence data store 540 can comprise, e.g., an internal threat intelligence data store coupled to a same LAN as a server comprising the analyst work unit data enhancement 500, or an external threat intelligence data store accessed by the analyst work unit data enhancement 500 via a remote connection to an external network other than the LAN which comprises the threat intelligence data store 540.
- the example analyst work unit data enhancement 500 comprises nearest neighbor search 510, neural network input generator 520, and neural network 530.
- an analyst work unit 441 output from, e.g., a multi-stage grouping process such as illustrated in FIG. 4 can be processed by analyst work unit data enhancement 500 to thereby generate the analyst summary 452.
- the analyst work unit data enhancement 500 can employ nearest neighbor search 510 to perform a nearest neighbor search in the threat intelligence data store 540.
- the nearest neighbor search can identify one or more nearest neighbor threats, in the threat intelligence data store 540, that have comparatively higher, or highest, similarity to the analyst work unit 441.
- the result of the nearest neighbor search is labeled as threat 511 in FIG. 5, and is also referred to herein as a similar threat 511.
- the analyst work unit data enhancement 500 can use the neural network input generator 520 to generate inputs for the neural network 530.
- the neural network input generator 520 can configure a command 521, analyst work unit data 522, and threat data 523.
- the command 521 can optionally comprise a natural language command such as, "Here are some data from a security alert. Compose a concise story of what happened, as if it were a security risk report."
- the analyst work unit data 522 can comprise events or attributes from the input analyst work unit 441, such as:
- Event 1 text: You are sending traffic to the Tor network. Event 2 title: Disabling security tools.
- Event 2 data "commandLineArguments”: "sudo spctl -master-disable"
- the threat data 523 can comprise events or attributes from the similar threat 511, such as a risk level (high, medium, or low) associated with the similar threat.
- the neural network 530 can generate the analyst summary 452 in response to the inputs 521, 522, 523 from the neural network input generator 520.
- Example analyst summaries can include, inter alia, a human readable summary comprising a title, summary, details, and/or recommendations, as provided herein.
- an input to the analyst work unit data enhancement 500 can comprise an anomaly such as one of the anomalies 410, or a threat occurrence group such as one of the threat occurrence groups 431, 432, 433, in which case the generated analyst summary 452 output by the analyst work unit data enhancement 500 can comprise a summary of the anomaly or threat occurrence group, rather than the analyst summary 452 for the analyst work unit 441.
- FIG. 6 illustrates an example computer hardware architecture that can implement a server computer 600, in accordance with various aspects of the technologies disclosed herein.
- the computer architecture shown in FIG. 6 illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein.
- the server computer 600 includes a baseboard 602, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths.
- the CPUs 604 can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the server computer 600.
- the CPUs 604 perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states.
- Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.
- the chipset 606 provides an interface between the CPUs 604 and the remainder of the components and devices on the baseboard 602.
- the chipset 606 can provide an interface to a RAM 608, used as the main memory in the server computer 600.
- the chipset 606 can further provide an interface to a computer-readable storage medium such as a read-only memory ("ROM") 610 or non-volatile RAM ("NVRAM") for storing basic routines that help to start up the server computer 600 and to transfer information between the various components and devices.
- ROM 610 or NVRAM can also store other software components necessary for the operation of the server computer 600 in accordance with the configurations described herein.
- the server computer 600 can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the LAN 624.
- the chipset 606 can include functionality for providing network connectivity through a NIC 612, such as a gigabit Ethernet adapter.
- the NIC 612 is capable of connecting the server computer 600 to other computing devices over the network 624. It should be appreciated that multiple NICs 612 can be present in the server computer 600, connecting the computer to other types of networks and remote computer systems.
- the server computer 600 can be connected to a storage device 618 that provides non-volatile storage for the server computer 600.
- the storage device 618 can store an operating system 620, programs 622, and data, to implement any of the various components described in detail herein.
- the storage device 618 can be connected to the server computer 600 through a storage controller 614 connected to the chipset 606.
- the storage device 618 can comprise one or more physical storage units.
- the storage controller 614 can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units.
- the server computer 600 can store data on the storage device 618 by transforming the physical state of the physical storage units to reflect the information being stored.
- the specific transformation of physical state can depend on various factors, in different embodiments of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device 618 is characterized as primary or secondary storage, and the like.
- the server computer 600 can store information to the storage device 618 by issuing instructions through the storage controller 614 to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit.
- the server computer 600 can further read information from the storage device 618 by detecting the physical states or characteristics of one or more particular locations within the physical storage units.
- the server computer 600 can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data.
- computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the server computer 600.
- the operations performed by the computing elements illustrated in FIGS. 1-5, and/or any components included therein, may be supported by one or more devices similar to server computer 600.
- Computer-readable storage media can include volatile and non- volatile, removable and non-removable media implemented in any method or technology.
- Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM ("EPROM"), electrically-erasable programmable ROM ("EEPROM"), flash memory or other solid-state memory technology, compact disc ROM ("CD-ROM"), digital versatile disk ("DVD"), high definition DVD ("HD-DVD"), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion.
- the storage device 618 can store an operating system 620 utilized to control the operation of the server computer 600.
- the operating system comprises the LINUX operating system.
- the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington.
- the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized.
- the storage device 618 can store other system or application programs and data utilized by the server computer 600.
- the storage device 618 or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the server computer 600, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein.
- These computer-executable instructions transform the server computer 600 by specifying how the CPUs 604 transition between states, as described above.
- the server computer 600 has access to computer-readable storage media storing computer-executable instructions which, when executed by the server computer 600, perform the various processes described with regard to FIGS. 7, 8, and 9.
- the server computer 600 can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein.
- the server computer 600 can also include one or more input/output controllers 616 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device.
- an input/output controller 616 can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device.
- the server computer 600 might not include all of the components shown in FIG. 6, can include other components that are not explicitly shown in FIG. 6, or might utilize an architecture completely different than that shown in FIG. 6.
- FIGS. 7, 8, and 9 are flow diagrams of example methods 700, 800, 900 performed at least partly by a computing device, such as the server computer 600.
- the logical operations described herein with respect to FIGS. 7, 8 and 9 may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system.
- the methods 700, 800, and 900 may be performed by a system comprising one or more processors and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the methods 700, 800, and 900.
- FIG. 7 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
- the server computer 600 can optionally detect a trigger which initiates the training / template generation stage 704.
- the detected trigger can comprise, e.g., an occurrence of a new type of anomaly in a network, or changes in an anomaly schema associated with previously detected anomalies.
- the server computer 600 can perform the training / template generation stage 704, which results in generating a template that can be deployed to a production environment, wherein the production environment can use the template to collect representative attributes for an anomaly associated with the template.
- the training / template generation stage 704 can optionally be repeated multiple times, as described herein.
- the server computer 600 can receive security event information associated with an anomaly.
- the security event information can comprise, e.g., an identification of an anomaly and at least one attribute associated with the anomaly and detected in a network 100.
- the server computer 600 can identify representative attribute(s) of the anomaly identified at 706.
- the training / template generation stage 704 can provide the security event information received at 706 as an input to a neural network-based processor, e.g., the neural network 204 illustrated in FIG. 2, along with an instruction for the neural network-based processor to identify representative attribute(s).
- the neural network-based processor can comprise, e.g., an NLP or an LLM-based processor.
- the neural network-based processor can be configured to identify at least one representative attribute based on the input.
- the at least one representative attribute can be determined by the neural network-based processor to represent the anomaly for security analyses of instances of the anomaly.
- the at least one representative attribute can include an attribute previously included in the security event information received at 706, or the at least one representative attribute can include a different attribute, other than the attributes previously included in the security event information.
- the at least one representative attribute can include multiple representative attributes, optionally including both attributes previously included in the security event information as well as attributes not previously included in the security event information.
- the server computer 600 can generate a template comprising the at least one representative attribute.
- the template can be generated directly by the neural network-based processor.
- the template can be generated separately, and can include the at least one representative attribute output by the neural network-based processor.
- An example template is illustrated herein by the representative attribute(s) template 306 in FIG. 3.
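- As a concrete sketch of operations 706-710, the Python fragment below passes security event information to a neural network-based processor with an instruction to identify representative attribute(s), and encodes the result into a template. The llm_complete helper, the prompt wording, and the field names are assumptions for illustration; the disclosure does not prescribe a particular API.
```python
import json

# llm_complete is an assumed helper standing in for any NLP- or LLM-based
# processor endpoint; it is not an API named in this disclosure.
def llm_complete(prompt: str) -> str:
    raise NotImplementedError("connect to an LLM of your choice")

def generate_template(security_event: dict) -> dict:
    """Ask the model which attributes best represent this anomaly type."""
    prompt = (
        "Given the following security event, list the attribute names that "
        "best represent this anomaly for security analyses of its instances. "
        "Answer with a JSON array of attribute names.\n\n"
        + json.dumps(security_event, indent=2)
    )
    representative = json.loads(llm_complete(prompt))
    return {
        "anomaly_type": security_event.get("type"),
        "representative_attributes": representative,
    }
```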
- the server computer 600 can perform a consistency check.
- the consistency check can comprise comparing multiple templates generated via multiple repeated cycles of the training / template generation stage 704, in order to determine consistency of the multiple templates.
- the training / template generation stage 704 can optionally be repeated for each of multiple respective first instances of the anomaly identified at 706.
- the receiving at 706, the identifying at 708, and the generating at 710 can be repeated in order to produce multiple templates, and the consistency check at 712 can be repeated, optionally at each cycle, to compare the multiple templates.
- at least one of the multiple repetition cycles can be triggered in response to a change in a detection engine, such as an anomaly detection system 131 A, configured to detect the anomaly in the network 100, wherein the change in the detection engine results in a change in the at least one attribute associated with the anomaly.
- the training / template generation stage 704 can be completed and, at 716, a representative consistency-checked template can be deployed to a production environment. Therefore, deploying the template to the production environment can be performed in response to the consistency of the multiple templates satisfying a consistency threshold. However, if the multiple templates do not pass the consistency check, then additional repetitions of the training / template generation stage 704 can be performed, using additional instances of the anomaly identified at 706.
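- A minimal sketch of the consistency check at 712, assuming templates are compared by the overlap of their representative-attribute sets; the comparison metric and the 75% threshold (an example value discussed later in this description) are assumptions, not prescribed parameters.
```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two attribute sets."""
    return len(a & b) / len(a | b) if (a | b) else 1.0

def templates_consistent(templates: list, threshold: float = 0.75) -> bool:
    """Pass only if every pair of generated templates overlaps at least `threshold`."""
    attr_sets = [set(t["representative_attributes"]) for t in templates]
    return all(jaccard(x, y) >= threshold for x, y in combinations(attr_sets, 2))
```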
- a template output from the training / template generation stage 704 can be deployed to a production environment configured to automatically detect the instances of the anomaly in the network.
- the production environment can be configured to use the template to define at least one collected attribute that is collected for the security analyses of the instances of the anomaly.
- Instances of the anomaly occurring after the template is deployed at 716 may be referred to as second instances of the anomaly, while instances of the anomaly occurring before the template is deployed may be referred to herein as first instances of the anomaly.
- the security event information resulting from a second instance of the anomaly can be fed back from the production environment to operation 706, so that second security event information comprising a collected attribute associated with a second instance of the anomaly can be used during further repetitions of the training / template generation stage 704, to increase / enhance consistency of templates generated by the neural network-based processor.
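- In production, the deployed template can act as a simple projection over incoming event attributes. A sketch under the same assumed template structure as above:
```python
def collect_attributes(event: dict, template: dict) -> dict:
    """Project an incoming anomaly instance onto the template's representative attributes."""
    return {
        name: event.get(name)
        for name in template["representative_attributes"]
    }
```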
- FIG. 8 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with multi-stage grouping to generate analyst work units, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
- the server computer 600 can detect anomalies in a network 100.
- the server computer 600 can receive anomaly data representing anomalies detected, e.g., at anomaly detection systems 131A, 131B, 131C.
- the anomalies can therefore include different anomalies detected via multiple different telemetry sources.
- the network 100 can comprise multiple different domains 110, 120 and multiple different computing assets such as 111, 112, 121, and 122. Different anomalies can be detected with different confidence values.
- different computing assets of the multiple different computing assets 111, 112, 121, and 122 can be associated with different asset criticality values, e.g., as described in connection with FIG. 1.
- the server computer 600 can perform a first stage grouping of a multi-stage grouping process, whereby threat occurrence groups can be generated.
- the server computer 600 can analyze the anomalies received at 802 based on threat intelligence information, in order to group the anomalies into multiple different threat occurrence groups, wherein each threat occurrence group comprises one or more of the anomalies.
- Operation 804 can use any desired grouping approach, e.g., any unsupervised clustering or community selection method, including but not limited to a spectral clustering process or a modularity clustering process.
- the first stage grouping at 804 can use any of a variety of different variables for grouping.
- Anomalies occurring in different domains and in different computing assets can potentially be grouped together in a threat occurrence group.
- at least one first threat occurrence group can comprise two or more different anomalies related to two or more different domains.
- At least one second threat occurrence group can comprise two or more different anomalies related to two or more different computing assets.
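- For instance, the first stage grouping could be realized as community detection over an anomaly similarity graph. The sketch below uses networkx's greedy modularity communities as one of the modularity clustering processes mentioned above; the similarity function, which would encode threat intelligence signals, is a placeholder assumption.
```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def threat_similarity(a: dict, b: dict) -> float:
    """Placeholder: strength of the threat-intelligence link between two anomalies."""
    shared = set(a.get("indicators", ())) & set(b.get("indicators", ()))
    return float(len(shared))

def first_stage_grouping(anomalies: list) -> list:
    """Group anomalies into threat occurrence groups via modularity clustering."""
    g = nx.Graph()
    g.add_nodes_from(range(len(anomalies)))
    for i in range(len(anomalies)):
        for j in range(i + 1, len(anomalies)):
            w = threat_similarity(anomalies[i], anomalies[j])
            if w > 0:
                g.add_edge(i, j, weight=w)
    if g.number_of_edges() == 0:  # nothing to fuse: one group per anomaly
        return [[a] for a in anomalies]
    communities = greedy_modularity_communities(g, weight="weight")
    return [[anomalies[i] for i in community] for community in communities]
```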
- the server computer 600 can perform a second stage grouping of the multi-stage grouping process, whereby analyst work units can be generated.
- multiple different threat occurrence groups output from operation 804 can be grouped into multiple different analyst work units, wherein each analyst work unit comprises one or more of the threat occurrence groups.
- Operation 806 can use any desired grouping approach, e.g., grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise applying any unsupervised clustering or community selection method, including but not limited to spectral clustering processes and modularity clustering processes.
- the second stage grouping at 806 can use any of a variety of different variables for grouping.
- grouping the multiple different threat occurrence groups into multiple different analyst work units can comprise grouping the multiple different threat occurrence groups according to geographic locations of respective computing assets affected by respective anomalies included in respective analyst work units.
- grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise grouping the multiple different threat occurrence groups according to threat types associated with respective anomalies included in respective analyst work units.
- In some embodiments, grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise grouping the multiple different threat occurrence groups according to response types associated with respective anomalies included in respective analyst work units.
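- A minimal sketch of this second stage, assuming each threat occurrence group carries simple categorical labels (geographic location, threat type, response type) and that grouping reduces to keying on the analyst-selected dimensions; real deployments could instead substitute any of the clustering methods noted above.
```python
from collections import defaultdict

def second_stage_grouping(threat_groups: list,
                          dimensions=("geography", "threat_type")) -> list:
    """Fuse threat occurrence groups into analyst work units by shared labels."""
    work_units = defaultdict(list)
    for group in threat_groups:
        key = tuple(group.get(dim) for dim in dimensions)
        work_units[key].append(group)
    return list(work_units.values())
```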
- Operation 808 comprises analyst work unit data enhancement. For example, processes described in connection with FIG. 5 can be performed in order to generate analyst summaries of analyst work units output from operation 806.
- Operation 810 comprises prioritizing analyst work units.
- Analyst work units, and optionally the analyst summaries generated at 808, can be prioritized based on any of the variables disclosed herein.
- operation 810 can comprise prioritizing multiple different analyst work units based at least in part on respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units.
- operation 810 can comprise prioritizing multiple different analyst work units based at least in part on respective confidence values of respective anomalies included in respective analyst work units.
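- As one way to combine these signals at 810, the sketch below scores each analyst work unit by the highest affected-asset criticality multiplied by the highest detection confidence and sorts in descending order; the multiplicative formula and the flattened anomalies field are assumptions, not a prescribed scoring method.
```python
def priority_score(work_unit: dict) -> float:
    """Assumed score: highest asset criticality times highest detection confidence."""
    anomalies = work_unit["anomalies"]
    criticality = max((a["asset_criticality"] for a in anomalies), default=0.0)
    confidence = max((a["confidence"] for a in anomalies), default=0.0)
    return criticality * confidence

def prioritize(work_units: list) -> list:
    """Order analyst work units so the highest-priority units appear first."""
    return sorted(work_units, key=priority_score, reverse=True)
```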
- Operation 812 comprises displaying analyst work units.
- a prioritized display can be provided, the prioritized display comprising the multiple different analyst work units generated at 804, 806, and 808, and prioritized at 810.
- Operation 814 comprises receiving analyst interactions with analyst work units. For example, one or more analyst interactions can be received via the prioritized display provided at 812, and the analyst interactions can result in analyst interaction data.
- Operation 816 comprises storing analyst interaction data, e.g., storing the analyst interaction data received at 814.
- Operation 818 comprises outputting the analyst interaction data for use in adaptive learning, e.g., for use in adaptive learning applicable to subsequent grouping operations 804, 806 to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
- the analyst interaction data can furthermore be for use in adaptive learning applicable to subsequent prioritizing operations at 810, to facilitate subsequent prioritizing of subsequent analyst work units.
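- One way operations 814-818 could be wired together is to persist each interaction as a labeled example that subsequent grouping and prioritizing runs consume. A sketch with assumed record fields and an assumed JSON-lines storage location:
```python
import json
from pathlib import Path

FEEDBACK_LOG = Path("analyst_feedback.jsonl")  # assumed storage location

def store_interaction(work_unit_id: str, action: str, detail: dict) -> None:
    """Append one analyst interaction (e.g., merge, split, re-rank) as a JSON line."""
    record = {"work_unit": work_unit_id, "action": action, "detail": detail}
    with FEEDBACK_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

def load_interactions() -> list:
    """Replay stored interactions into subsequent grouping / prioritizing runs."""
    if not FEEDBACK_LOG.exists():
        return []
    return [json.loads(line) for line in FEEDBACK_LOG.read_text().splitlines()]
```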
- FIG. 9 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
- At operation 902, the server computer 600 can receive an analyst work unit.
- the analyst work unit can comprise an output of a second stage of a multistage grouping process.
- the analyst work unit can comprise one or more threat occurrence groups, and each of the one or more threat occurrence groups can comprise one or more detected anomalies detected in a network 100 comprising multiple different computing assets.
- the server computer 600 can identify, within a data store comprising computing threat information, e.g., within a threat intelligence data store 540 such as illustrated in FIG. 5, which can optionally be implemented as an internal or an external type of database, at least one similar threat that has higher similarity to the analyst work unit (received at 902) than one or more other threats identified in the data store.
- identifying the similar threat can comprise performing a nearest neighbor search on the data store, to find a threat in the data store that comprises, e.g., a highest similarity to the analyst work unit.
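- A compact sketch of the nearest neighbor search at 904, assuming threats and work units are compared in a shared embedding space; the disclosure leaves the comparison features open (threat assessment information, security event / anomaly information, etc.), so the vectors here are an assumption.
```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity, guarded against zero-length vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def nearest_threat(work_unit_vec: np.ndarray, threat_vecs: dict) -> str:
    """Return the threat in the data store most similar to the analyst work unit."""
    return max(threat_vecs,
               key=lambda name: cosine(work_unit_vec, threat_vecs[name]))
```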
- the server computer 600 can configure inputs for a neural network-based generator.
- the server computer 600 can configure a natural language command, one or more first events based on the analyst work unit, one or more second events based on the at least one similar threat identified at 904, and/or a risk level based on the at least one similar threat identified at 904. These inputs can be provided to the neural network-based generator to initiate operation 908.
- the server computer 600 can generate an analyst summary of the analyst work unit based on the analyst work unit and the at least one similar threat.
- Generating the analyst summary can comprise using a neural network-based generator to process inputs generated at 906, e.g., the natural language command, one or more first events based on the analyst work unit, one or more second events based on the at least one similar threat identified at 904, and/or a risk level based on the at least one similar threat identified at 904.
- the neural network-based generator can be configured to use at least one of an NLP or an LLM.
- generating the analyst summary at 908 can comprise generating, based on the threat response playbook information associated with the similar threat, a next action recommendation associated with the analyst work unit.
- generating the analyst summary at 908 can comprise providing the risk level to the neural network-based generator so that the neural network-based generator can incorporate the risk level in the analyst summary.
- the server computer 600 can output the analyst summary generated at 908.
- the analyst summary can comprise, e.g., one or more different sections corresponding to the one or more first events based on the analyst work unit, and other information as described herein.
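- Putting operations 906-908 together, the configured inputs could be flattened into a single natural language prompt of the kind shown in the worked example later in this description. A sketch; llm_complete is the same assumed helper as in the earlier template-generation sketch, and the field names are illustrative.
```python
def build_summary_prompt(command: str, first_events: list,
                         second_events: list, risk: str) -> str:
    """Flatten the configured inputs into one natural language prompt."""
    lines = [command, f"Risk: {risk}"]
    for i, event in enumerate(first_events + second_events, start=1):
        lines.append(f"Event {i} title: {event.get('title', '')}")
        lines.append(f"Event {i} text: {event.get('text', '')}")
    return "\n".join(lines)

# Hypothetical usage mirroring the worked prompt later in this description:
# prompt = build_summary_prompt(
#     "Here are some data from a security alert. Compose a concise story of "
#     "what happened, as if it were a security risk report:",
#     first_events=[{"title": "Tor",
#                    "text": "You are sending traffic to the Tor network"}],
#     second_events=[],
#     risk="High",
# )
# analyst_summary = llm_complete(prompt)
```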
- techniques described herein for extended detection and response to security anomalies in computing networks can perform automated analysis of anomalies occurring in different telemetry sources in a computer network, in order to synthesize the anomalies into analyst work units that are surfaced for further analysis by security response teams.
- Anomalies can initially be processed in order to identify and collect extended anomaly data.
- the extended anomaly data can then be used to group the anomalies according to a multi-stage grouping process which produces analyst work units.
- the analyst work units can be processed to produce analyst summaries that assist with analysis and response.
- the analyst work units can be prioritized for further analysis, and analyst interactions with the prioritized analyst work units can be used to influence subsequent anomaly grouping operations.
Abstract
Techniques described herein for extended detection and response to security anomalies in computing networks can perform automated analysis of anomalies occurring in different telemetry sources in a computer network, in order to synthesize the anomalies into analyst work units that are surfaced for further analysis by security response teams. Anomalies can initially be processed in order to identify and collect extended anomaly data. The extended anomaly data can then be used to group the anomalies according to a multi-stage grouping process which produces analyst work units. The analyst work units can be processed to produce analyst summaries that assist with analysis and response. Furthermore, the analyst work units can be prioritized for further analysis, and analyst interactions with the prioritized analyst work units can be used to influence subsequent anomaly grouping operations.
Description
ALERT FUSION FOR EXTENDED DETECTION AND RESPONSE TO SECURITY ANOMALIES
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Application No. 18/231,816 filed Aug. 9, 2023, entitled “ALERT FUSION FOR EXTENDED DETECTION AND RESPONSE TO SECURITY ANOMALIES,” which claims the benefit of U.S. Provisional Application No. 63/461,374 filed Apr. 24, 2023, entitled “EVENT DESCRIPTIONS AND ALERTS FOR EXTENDED DETECTION AND RESPONSE (XDR) SYSTEMS.” The prior applications are hereby incorporated herein by reference in their entirety.
TECHNICAL FIELD
[0002] The present disclosure relates generally to computer and network security, and to threat detection, analysis, and alerts in particular.
BACKGROUND
[0003] Security analytics products struggle with the trade-off between the number of alerts that can be generated and the capacity of security response teams to process them. The problem is most acute at larger enterprises with multiple security products which can generate thousands of alerts daily, thus overwhelming the security response team's capacity to act. Simple countermeasures like filtering or suppressing alerts based on existing policies or customer feedback do not solve the problem well, as they may lead to missed important alerts and elevated security risks. A more universal solution is needed to surface relevant security alerts without overwhelming the security response teams.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] The detailed description is set forth below with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical items. The systems depicted in the accompanying figures are not to scale and components within the figures may be depicted not to scale with each other.
[0005] FIG. 1 illustrates an example overview of techniques according to this disclosure, including anomaly detection in a network, anomaly data enhancement, multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
[0006] FIG. 2 illustrates example anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
[0007] FIG. 3 illustrates a detailed example of the anomaly data enhancement introduced in FIG. 2, in accordance with various aspects of the technologies disclosed herein.
[0008] FIG. 4 illustrates example multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
[0009] FIG. 5 illustrates example analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
[0010] FIG. 6 illustrates an example computer hardware architecture that can implement the techniques disclosed herein, in accordance with various aspects of the technologies disclosed herein.
[0011] FIG. 7 is a flow diagram that illustrates an example method performed by a computing device in connection with anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
[0012] FIG. 8 is a flow diagram that illustrates an example method performed by a computing device in connection with multi-stage grouping to generate analyst work units, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
[0013] FIG. 9 is a flow diagram that illustrates an example method performed by a computing device in connection with analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein.
DESCRIPTION OF EXAMPLE EMBODIMENTS
OVERVIEW
[0014] Aspects of the invention are set out in the independent claims and preferred features are set out in the dependent claims. Features of one aspect may be applied to each aspect alone or in combination with other features.
[0015] This disclosure describes techniques that can be performed in connection with extended detection and response to security anomalies in computing networks. Any one of the disclosed techniques, or any group of the disclosed techniques, can optionally be implemented via computing devices that provide automated processing of security-related events in a computing network, such as a network owned by a company, university, or government agency. In general, processing of security-related events can result in information that is presented to a security response team, e.g., a team of human analysts, for further analysis and resolution.
[0016] According to example embodiments, one or more methods can be performed by a computing device, e.g., a server device coupled to a network. The network can comprise, e.g., multiple different domains and multiple different computing assets. The different computing assets may be associated with different asset criticality values.
[0017] Example methods can optionally include detecting anomalies in the network. Alternatively, anomalies can be detected using third-party anomaly detection systems. Different anomalies may be detected with different confidence values. Anomaly detection can optionally be performed by multiple different anomaly detection systems that may be dedicated to different network domains, geographical zones, or computing asset types.
[0018] Detected anomaly data can be enhanced using the anomaly data enhancement techniques described herein. Anomaly data enhancement can include receiving security event information comprising at least one attribute associated with an anomaly detected in a network. The security event information can be provided as an input to a neural network-based processor. The neural network-based processor can identify at least one representative attribute based on the input.
[0019] The representative attribute can be determined by the neural network-based processor to represent the anomaly for security analyses of instances of the anomaly. A template comprising the representative attribute may be generated, and the template can be deployed to a production environment. The production environment can be configured to automatically detect the instances of the anomaly in the network, and the production environment can be configured to use the template to define at least one collected attribute that is collected for the security analyses of the instances of the anomaly.
[0020] The anomalies, optionally represented by enhanced anomaly data, can be grouped according to a multistage grouping process to generate analyst work units. In an example first stage of the multi-stage grouping process, anomalies can be analyzed, based on threat intelligence information, in order to group the anomalies into multiple different threat occurrence groups. Each threat occurrence group can therefore comprise one or more of the anomalies.
[0021] In an example second stage of the multi-stage grouping process, the multiple different threat occurrence groups can be grouped into multiple different analyst work units. Therefore, each analyst work unit can comprise one or more of the threat occurrence groups.
[0022] The analyst work units can optionally be enhanced by analyst work unit data enhancement techniques described herein. Methods to enhance analyst work units can include receiving an analyst work unit, the analyst work unit comprising one or more threat occurrence groups, and each of the one or more threat occurrence groups comprising one or more detected anomalies detected in a network comprising multiple different computing assets. The methods can furthermore comprise identifying, within a data store comprising computing threat information, at least one similar threat that has higher similarity to the analyst work unit than one or more other threats identified in the data store. Identifying the at least one similar threat can comprise, e.g., performing a nearest neighbor search on the data store. Finally, the methods can include generating an analyst summary of the analyst work unit. Generating the analyst summary can comprise, e.g., using a neural network-based generator to process the analyst work unit and the at least one similar threat.
[0023] The multiple different analyst work units, optionally enhanced according to the techniques described herein, can be prioritized. For example, the multiple different analyst work units can be prioritized based on respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units. The multiple different analyst work units can furthermore be prioritized based on respective confidence values of respective anomalies included in respective analyst work units.
[0024] A prioritized display of the multiple different analyst work units can be provided, e.g., for analyst review.
One or more analyst interactions can be received via the prioritized display of the multiple different analyst work units, resulting in analyst interaction data. Embodiments can store the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
[0025] The techniques described herein may be performed by one or more computing devices comprising one or more processors and one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the methods disclosed herein. The techniques described herein may also be accomplished using non-transitory computer-readable media storing computer-executable instructions that, when executed by one or more processors, perform the methods carried out by the network controller device.
EXAMPLE EMBODIMENTS
[0026] One problem in modern cybersecurity is alert fatigue. Even senior analysts are often overwhelmed by the number of alerts and incidents they must handle. The situation is even worse in cross-product scenarios. A single human analyst cannot possibly know and understand a large number of different signals from many different security products and telemetry sources.
[0027] Embodiments of this disclosure can address alert fatigue while also increasing the efficiency and effectiveness of analysts tasked with analyzing and responding to security events in a network. A group of
complementary systems and techniques is disclosed. In some embodiments, the disclosed techniques can optionally be applied together to provide analysts with high-quality, synthesized information that allows them to quickly understand, research, and act in response to security threats from multiple different telemetry sources. Alternatively, any one of the disclosed techniques, or any sub-group of the disclosed techniques, can optionally be provided in a freestanding approach that need not necessarily also include other techniques disclosed herein.
[0028] The techniques according to this disclosure fall into three categories: First, given anomalies detected in a network, optionally via multiple different telemetry sources, techniques disclosed herein can include anomaly data enhancement configured to discover representative attributes of the anomalies. The representative attributes can include attributes determined to be useful in anomaly analysis and resolution.
[0029] Second, multi-stage grouping and prioritization methods can be performed to generate analyst work units that can be presented to analysts in a prioritized manner, such as a list of analyst work units arranged in descending priority. Active learning can be applied to security team interactions with the analyst work units in order to adjust and customize the multi-stage grouping and prioritization methods over time.
[0030] Third, techniques disclosed herein can include analyst work unit data enhancement which can generate analyst summaries of analyst work units. The analyst summaries can support faster analysis and response times to threats associated with the analyst work units.
[0031] With regard to the anomaly data enhancement techniques disclosed herein, in the domain of cybersecurity, extended detection and response (XDR) and cross-domain detection systems collect and process security-related anomalies from various types of products, such as intrusion detection systems (IDS) and intrusion prevention systems (IPS). XDR as applied herein can thus collect and process security-related anomalies from more than one type of telemetry source, thereby extending endpoint detection by considering, e.g., network and email and/or other telemetry sources/modalities. Collected security events can be combined into a unified feed, providing analysts with a comprehensive overview of anomalies in the monitored environment, e.g., in the network.
[0032] However, due to differences in format, level of detail, and naming conventions used by different security products and in connection with different telemetry sources, cybersecurity analysts face significant challenges in efficiently understanding the generated anomalies. This issue persists even in ideal setups, where the cross-domain product effectively filters and presents only the most critical events.
[0033] Current security engines can produce hundreds to thousands of unique security event types, each with a large number of unique attributes. When investigating anomalies, analysts must read the relevant event descriptions and search for the subset of attributes that serve as convicting evidence. This highly time-consuming process requires a high level of expertise and becomes unmanageable as more engines and their unique event types are integrated into a cross-domain system. Consequently, there is a need to streamline the information presented to analysts, prioritize the most relevant event details, and facilitate a more efficient investigation process.
[0034] Example anomaly data enhancement techniques disclosed herein can utilize a neural network-based processor, such as a natural language processor (NLP) or a large language model (LLM). Example neural network-based processors include the generative pre-training transformer, version three (GPT3), the generative pre-training transformer, version four (GPT4), and others.
[0035] In general, example anomaly data enhancement techniques can process security events / anomalies independently. The neural network-based processor can process a security event and its description and can incorporate one or more representative attributes into an event description. The result can be encoded into a generated template with the representative attributes selected by the neural network-based processor.
[0036] To preserve consistency and minimize the risk of incorrect suggestions, example anomaly data enhancement techniques can repeat operations across multiple event / anomaly samples. Resulting templates and chosen representative attributes can be compared. If the templates and representative attributes satisfy a predefined consistency check, then a selected consistent template and its representative attributes can be associated with the security event / anomaly. The selected template and representative attributes can be deployed into a production environment.
[0037] In some embodiments, a new template can be generated each time a new event type / anomaly is introduced. Since the underlying cybersecurity engines may change over time, repetitive consistency checks can be performed.
There are two different ways that events / anomalies produced by cybersecurity engines may change: first, the structure or the schema of an event’s attributes may be upgraded, e.g., by adding new attributes or deprecating previously available attributes. Second, the engines that produce the event may change either by external factors that influence the statistical properties of events (e.g. by emergence of new malware strains) or by internal changes in the engines such as may be caused by bug fixes, parameter tuning or other kinds of maintenance.
[0038] Example methods can be adapted to detect these changes, followed by the regeneration of templates. Embodiments can be configured to detect changes in any of several ways. First, a change of event / anomaly attribute schemas may be incompatible with previously generated templates, e.g., if an attribute has been renamed. These changes may be detected by storing historical versions of schemas and comparing them to new event / anomaly schemas. Second, internal engine changes may be detected by periodic regeneration of templates and comparing them to previous versions. If a previous template is sufficiently different from an updated template, then a revision of the template can be prompted.
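As a small sketch of the schema-comparison idea, assume schemas are tracked as sets of attribute names keyed by event type; an attribute rename then surfaces as a simultaneous removal and addition. The storage structure here is an assumption for illustration.
```python
def schema_changed(stored_schemas: dict, event_type: str,
                   current_schema: set) -> bool:
    """Compare an incoming attribute schema against the stored history."""
    previous = stored_schemas.get(event_type)
    stored_schemas[event_type] = current_schema  # record the latest version
    if previous is None:
        return True  # new event type: trigger template generation
    # Added or deprecated attributes (a rename shows up as one of each).
    return bool(current_schema - previous) or bool(previous - current_schema)
```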
[0039] The disclosed anomaly data enhancement techniques need not require event attribute normalization, i.e., an event schema need not adhere to a predefined structure. Furthermore, the disclosed anomaly data enhancement techniques can leverage common cyber security domain knowledge in order to generate event summaries.
The disclosed anomaly data enhancement techniques can generate templates in a natural language, as opposed to a structured list of attributes.
[0040] With regard to the multi-stage grouping and prioritization disclosed herein, further techniques according to this disclosure can comprise multi-stage grouping of anomalies from different security products and telemetry sources, as well as prioritization and presentation of resulting groups to the analyst. Example techniques can employ Artificial Intelligence (AI) and active learning to synthesize an actionable list of “analyst work units” for the security response teams of enterprises regardless of their size or the complexity of their information technology infrastructure.
[0041] Multi-stage grouping and prioritization methods can take into account available threat intelligence to measure the potential damage caused by a given threat, together with the confidence of the underlying detection engine. For example, multi-stage grouping and prioritization methods can consider a network's asset inventory information and can measure the relative business value of assets in the network environment, the roles of different asset types (e.g., servers, laptops, phones, medical devices, etc.), and the potential impact of asset compromise.
[0042] Optionally, multi-stage grouping and prioritization methods can consider additional aspects like geography, threat type, or remedy action type for smart grouping. Considering a set of selectable dimensions (e.g., threat severity, threat occurrence confidence, asset value), the multi-stage grouping and prioritization methods can proceed with clustering detected anomalies in a multi-dimensional space, thereby generating fused analyst work units. Such clustering can optionally reduce the number of alerts by 90% or more, and techniques can optionally be configurable to target a desired degree of alert reduction specified by an analyst or security response team.
[0043] An analyst work unit generated according to the disclosed multi-stage grouping and prioritization methods can represent an intuitive unit of work for the security analyst to investigate and remediate. Analyst work units can optionally be presented to analysts as a single prioritized list, which can be prioritized according to analyst-selected dimensions, and which can present analyst work units in, e.g., descending order according to their associated priority, so that the analyst or security response team can conduct timely and adequate incident responses.
[0044] Furthermore, security analysts can provide feedback for analyst work units, which can be used in an active learning loop. Based on the analysts’ feedback, the multi-stage grouping and prioritization methods disclosed herein can adapt to accommodate, e.g., network-specific sensitivity to particular threats, or the relative values of different assets. Multi-stage grouping and prioritization methods can thereby adapt to track a set of evolving network security policies expressing the acceptable security risk levels of a particular network. The analyst feedback can be applied to customize both multi-stage grouping as well as prioritization, so that over time anomalies can be grouped in more granular or less granular groupings, and resulting analyst work units can be prioritized differently, based on analyst feedback and preferences.
[0045] Security policies capturing learned rules can optionally be inferred automatically. Furthermore, in some embodiments, adaptive learning can allow multi-stage grouping and prioritization methods to learn default settings for various industries, so that new customers can be provided with industry-specific security settings that may require a lower degree of analyst feedback and adjustments.
[0046] Example multi-stage grouping and prioritization methods can use any of a variety of different variables to group anomalies. Furthermore, any of the variety of different variables can be used to prioritize analyst work units. Example variables that can be used for grouping and/or prioritization include, without limitation, threat severity, confidence of threat detection, threat type, asset value, remedy action type, and geographic location. In some embodiments, grouping based on one or more of the variables can be mandatory, while grouping based on one or more other variables can be optional. For example, in some embodiments, grouping based on threat severity and confidence of threat detection can be mandatory, while grouping based on threat type, asset value, remedy action type, and geographic location can be optional. Optional variables used for grouping can be, e.g., selected and deselected by a security team. Furthermore, in some embodiments, optional fields such as asset value, geography or even remedy action type can be inferred from the provided mandatory fields if not directly provided.
[0047] Any desired grouping methods can be used for the different stages of a multi-stage grouping process. Example grouping methods include any unsupervised clustering or community selection method known in the art or as may be developed. Example clustering methods include, e.g., modularity clustering methods and/or spectral clustering methods. Further example grouping methods can be based on existing industry standard definitions, when available, such as by using MITRE types for grouping based on threat severity, using predefined external data sources such as StealthWatch Host Groups for grouping by threat type, and/or using asset management process (AMP) groups to imply asset values, in order to group by asset value.
[0048] In order to capture and use analyst interaction data for modification of the multi-stage grouping process, user feedback embedding processes can be deployed to capture user changes in groupings output by the multi-stage grouping process. Analyst interaction data can be utilized for future grouping operations. In some embodiments, grouping based on analyst interaction data can override external definitions and can influence results of future clustering runs. In some embodiments, reinforcement learning from human feedback (RLHF) technology can be applied to capture and use analyst interaction data for modification of the multi-stage grouping process.
[0049] Multi-stage grouping and prioritization processes can be adapted to minimize analyst incident response efforts by adaptive prioritization, adaptive grouping, explanation, and prioritization of generated analyst work units depending on, e.g., a type of threat, a number of affected network assets, a current state of a network, and/or known or estimated asset values of affected assets. Multi-stage grouping and prioritization processes can thereby reduce
“alert fatigue” without sacrificing anomaly information and response effectiveness. Any number of detected anomalies can be condensed and prioritized into analyst work units representing meaningful threats without overwhelming a security response team’s incident response capacity.
[0050] With regard to the analyst work unit data enhancement disclosed herein, further techniques according to this disclosure can apply analyst work unit data enhancement techniques to generate analyst summaries based on analyst work units, e.g., based on the analyst work units that are output from a multi-stage grouping process such as described above. Example processes can use analyst work units as inputs and can generate textual summaries of the analyst work units, assess risks associated with the analyst work units, and propose analyst response actions for responding to the analyst work units. The analyst work unit data enhancement techniques can assist analysts to determine a proper risk/priority of an analyst work unit and suggest the next steps, thereby speeding up analyst response times.
[0051] Analyst work unit data enhancement techniques according to this disclosure can be adapted to use a threat intelligence data store, which can be internal, e.g., owned and operated internally by a company or other organization, or external, e.g., owned and operated by a third party. When an internal threat intelligence data store is used, for example, generating analyst summaries of the analyst work unit can be performed by a server coupled to a local area network, and wherein the local area network can further comprise, or be coupled to, the internal threat intelligence data store. When an external threat intelligence data store is used, generating analyst summaries of the analyst work unit can be performed by the server coupled to a local area network, however, the server may connect to an external network (other than the local area network) which comprises the external threat intelligence data store. Example threat intelligence data stores are the TALOS and MITRE threat intelligence data stores, which include threat / malware taxonomies and optionally further include response playbooks for responding to threats.
[0052] Analyst work unit data enhancement techniques can be configured to use an analyst work unit, or one or more associated anomalies, as an input, and can perform a nearest neighbor search of the threat intelligence data store to find similar known threat / malware families, e.g., finding a most similar known threat / malware family. Similarity can be according to any desired comparison information, e.g., threat assessment information, security event / anomaly information, etc.
[0053] After identifying one or more similar known threats, analyst work unit data enhancement techniques herein can be configured to generate inputs for a neural network-based processor, e.g., an attention-based or other NLP or LLM neural network, to thereby instruct the neural network-based processor to create an analyst summary of the input
analyst work unit based on threat intelligence information associated with the one or more similar known threats. The analyst summary can include, inter alia, a measurement of potential risk of the input analyst work unit based on the one or more similar known threats. The analyst summary can furthermore include suggested next actions according to existing playbooks or records of former investigations performed in connection with the one or more similar known threats.
[0054] Example analyst summaries output by analyst work unit data enhancement techniques herein can include a high-level overview of the threat posed by the analyst work unit. For example, “This alert resembles a malware dropper. After infecting a device such malware downloads and installs additional modules. Potential risk ranges from malicious advertisement to exfiltration and/or data destruction. We suggest ...” Another example high-level overview may comprise, “This alert resembles a new strain of qakbot. Qakbot is known for...” Another example high-level overview may comprise, “According to the techniques used, the alert resembles the work of the known advanced persistent threat (APT) group [insert group name]. Based on their previous breaches, we suggest...”
[0055] In some embodiments, analyst work unit data enhancement techniques herein can leverage the capabilities of LLMs such as large language model meta-AI (LLaMA), GPT-4, and successors to simplify analyst summary generation. An LLM-based processor can use existing threat intelligence to fine-tune information associated with a similar threat. The LLM-based processor can furthermore use the similar threat to create the analyst summary of a given analyst work unit as well as suggested response actions. The LLM-based processor can furthermore assess the potential risk associated with an analyst work unit based on a similar threat, e.g., based on similar malware families and/or known breaches.
[0056] In an example, an input generator that is adapted to generate an input for an LLM-based processor may generate the following natural language prompt:
Here are some data from a security alert. Compose a concise story of what happened, as if it were a security risk report:
Risk: High
Event 1 title: Tor
Event 1 text: You are sending traffic to the Tor network
Event 2 title: Disabling security tools
Event 2 data "commandLineArguments": "sudo spctl --master-disable"
[0057] In response to the above example input, the LLM-based processor may generate the following example analyst summary output:
Title:
High-Risk Security Incident Involving Tor Network and Disabling of Security Tools
Summary:
A high-risk security incident was detected on the network, involving the use of Tor for anonymizing traffic and a command to disable security tools. Immediate action is required to mitigate the potential threat.
Details:
1. Tor Network Utilization: The security system detected traffic being sent to the Tor network, which is known for providing anonymity to users. The use of Tor may indicate attempts to hide malicious activities or exfiltrate sensitive data from the organization.
2. Disabling Security Tools: A command was executed on the system to disable security tools, which is a potential indicator of compromise. The specific command executed was "sudo spctl --master-disable." This command disables the System Policy Control, which weakens the system's protection against unauthorized applications and potentially malicious software.
Recommendations:
1. Immediately investigate the source of the Tor network traffic and disable it if unauthorized or malicious activity is confirmed.
2. Identify and isolate the system(s) where the security tools have been disabled and restore the System Policy Control using the command "sudo spctl --master-enable."
3. Conduct a thorough review of the affected systems for any signs of compromise or data exfiltration.
4. Enhance network security monitoring to detect similar incidents in the future and consider implementing security awareness training for employees to prevent unauthorized activities.
[0058] Analyst work unit data enhancement techniques herein can be adapted to automatically generate analyst summaries, which can significantly simplify the investigation process, lower the required expertise of the cybersecurity analyst, and reduce time to react. To deliver consistent and trusted results without a possible exploit surface, the neural network-based processor can be employed within a controlled environment, e.g., a secure server that does not have a connection to public networks. Analyst work unit data enhancement techniques herein can solve a problem of evolving underlying security products, because the techniques will organically update along with updates to threat intelligence data stores and neural network-based processors such as LLM and NLP based models.
[0059] Certain implementations and embodiments of the disclosure will now be described more fully below with reference to the accompanying figures, in which various aspects are shown. However, the various aspects may be implemented in many different forms and should not be construed as limited to the implementations set forth herein. The disclosure encompasses variations of the embodiments, as described herein. Like numbers refer to like elements throughout.
[0060] FIG. 1 illustrates an example overview of techniques according to this disclosure, including anomaly detection in a network, anomaly data enhancement, multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and adaptive learning based on security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein.
[0061] FIG. 1 includes an example network 100 which can be monitored by various anomaly detection systems 131A, 131B, 131C. The network 100 can comprise multiple different domains 110, 120, and multiple different computing assets. For example, the domain 110 can comprise assets 111 and 112, and the domain 120 can comprise assets 121 and 122.
[0062] The illustrated number of domains and assets are examples only and it will be appreciated that typical networks may have many different domains and thousands of different computing assets. Example domains can include, e.g., a file system / storage domain, an email system domain, a security system / firewall domain, and various network / network equipment domains. Example assets can include, e.g., servers, laptops, user equipment (UEs), routers, firewalls, internet of things (IoT) devices, etc.
[0063] Different computing assets can have different asset criticality values. For example, an asset criticality of a server that stores or processes a large volume of sensitive company data may be much higher than an asset criticality of an employee UE, such as a smartphone, that stores mainly the employee’s personal information. Furthermore, different computing assets can be located at different geographic locations. For example, the network 100 can include computing assets in multiple different cities, regions, or countries.
[0064] The anomaly detection systems 131A, 131B, 131C can comprise, e.g., security monitoring systems that are configured to detect threats, security events, and other anomalies via various different telemetry sources, e.g., via at least two different telemetry sources, within the network 100. A wide variety of different anomaly detection systems are commercially available, and owners of advanced networks such as the network 100 may employ multiple anomaly detection systems 131A, 131B, 131C to alert their security response teams to potential threats to their network 100. The anomaly detection systems 131A, 131B, 131C can output anomaly data, e.g., alerts for further investigation by a security response team.
[0065] Anomalies output from the anomaly detection systems 131A, 131B, 131C can be processed according to the techniques described herein, referred to generally as alert fusion 130. FIG. 1 illustrates different processing techniques that can be applied, namely, anomaly data enhancement 132, multi-stage grouping 133 to generate analyst work units, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136 with the analyst work units.
[0066] In an embodiment, the adaptive learning based on security team interactions 136 can be configured to provide feedback 137 to multi-stage grouping 133 and/or to analyst work unit prioritization and presentation 135, in order to adaptively update the multi-stage grouping 133 and/or the analyst work unit prioritization and presentation 135. Furthermore, in some embodiments, the multi-stage grouping 133 and/or the analyst work unit prioritization and presentation 135 can be directly modified by analyst / security team inputs and selections.
[0067] Anomaly data enhancement 132 is described further in connection with FIG. 2 and FIG. 3, as well as the flowchart provided in FIG. 7. Multi-stage grouping 133 to generate analyst work units, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136 are described further in connection with FIG. 4, as well as the flowchart provided in FIG. 8. Analyst work unit data enhancement 134 is described further in connection with FIG. 5, as well as the flowchart provided in FIG. 9.
[0068] An example server that can be configured to perform any of the functions illustrated in FIG. 1 is illustrated in FIG. 6. Such a server can optionally be included within the network 100, and anomaly data enhancement 132, multi-stage grouping 133 to generate analyst work units, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 135, and adaptive learning based on security team interactions 136 can likewise be performed within the network 100.
[0069] FIG. 2 illustrates example anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein. The illustrated anomaly data enhancement architecture 200 can be used to implement the anomaly data enhancement 132 introduced in FIG. 1 in some embodiments. The anomaly data enhancement architecture 200 includes anomaly data 202, neural network 204, representative attribute(s) template 206, consistency check 208, production environment 210, and anomaly data 212.
[0070] In an example according to FIG. 2, a training / template generation stage can be initiated by, e.g., detecting a new anomaly type, or a change in a schema of an existing anomaly type. During the training / template generation stage, the anomaly data 202 can be provided as an input to the neural network 204, and the neural network can generate the representative attribute(s) template 206 based on the anomaly data 202. The anomaly data 202 can represent an anomaly having an anomaly type and output by an anomaly detection system 131 A. The representative attribute(s) template 206 can comprise attributes determined by the neural network 204 to be useful for analyzing and resolving the anomaly represented by the anomaly data 202, based on threat intelligence information and/or prior anomaly investigations of similar anomalies.
[0071] The template generation process can be repeated according to multiple repetition cycles at the training / template generation stage, using different first instances of the anomaly data 202 as inputs to the neural network 204, and resulting in multiple instances of the representative attribute(s) template 206.
[0072] The multiple instances of the representative attribute(s) template 206 can then be compared at consistency check 208. The consistency check 208 can check whether the multiple instances of the representative attribute(s) template 206 satisfy a consistency threshold. For example, using a consistency threshold of 75%, the consistency check 208 can determine whether the multiple instances of the representative attribute(s) template 206 are at least 75% consistent. Any consistency threshold can be used. According to some examples, consistency thresholds can be between 70% and 99%.
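By way of a non-limiting illustration, one possible implementation of such a consistency check models each instance of the representative attribute(s) template 206 as the set of attribute names it designates, and compares the mean pairwise Jaccard similarity of those sets against the configured threshold. The metric, function names, and attribute names below are assumptions for illustration only; the disclosure does not mandate any particular similarity measure:

# Illustrative consistency check over multiple template instances (a sketch,
# not the claimed implementation). Each template instance is modeled as the
# set of attribute names it designates; consistency is measured as the mean
# pairwise Jaccard similarity, compared against a configurable threshold.
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

def templates_are_consistent(templates: list[set], threshold: float = 0.75) -> bool:
    if len(templates) < 2:
        return True  # nothing to compare yet
    scores = [jaccard(a, b) for a, b in combinations(templates, 2)]
    return sum(scores) / len(scores) >= threshold

# Example: three template instances generated from different anomaly inputs.
instances = [
    {"registry", "process", "sha256"},
    {"registry", "process"},
    {"registry", "process", "sha256", "file_version"},
]
print(templates_are_consistent(instances, threshold=0.75))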
[0073] If the consistency check 208 is failed, then the training / template generation stage can continue, optionally re-using one or more instances of anomaly data 202, and the repeated cycles can result in training the neural network 204, thereby increasing the consistency of the multiple instances of the representative attribute(s) template 206. Additional consistency checks can be performed on resulting instances of the representative attribute(s) template 206, and template generation can be repeated until the consistency check 208 is passed.
[0074] In response to the consistency check 208 being passed, a consistent template from among the instances of the representative attribute(s) template 206 can be provided to the production environment 210. The production environment 210 can comprise, e.g., a security system that monitors a network 100 in order to gather information regarding anomalies detected by anomaly detection systems such as 131A, 131B and 131C.
[0075] In response to receiving a representative attribute(s) template 206, the production environment 210 can begin using the representative attribute(s) template 206 to collect the representative attribute(s) designated by the representative attribute(s) template 206. For example, for each further/second instance within the network 100 of the anomaly associated with the representative attribute(s) template 206, the production environment 210 can collect the representative attribute(s) designated by the representative attribute(s) template 206. The production environment 210 can store or output the resulting anomaly data 212 for use by multiple systems, e.g., for further training of the neural network 204 and/or for further processing by multi-stage grouping 133, as described herein.
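The following sketch illustrates one way a production environment could apply a deployed template to collect the designated representative attributes from further anomaly instances. The dictionary-based template format and all field names are hypothetical, chosen only to make the example concrete:

# A minimal sketch of template-driven attribute collection in the production
# environment. The template (here, a plain dict) designates which attribute
# keys to extract from each further instance of the associated anomaly type;
# names and structure are illustrative, not taken from the disclosure.
def collect_representative_attributes(anomaly_event: dict, template: dict) -> dict:
    collected = {}
    for attribute in template.get("representative_attributes", []):
        if attribute in anomaly_event:
            collected[attribute] = anomaly_event[attribute]
    return {
        "anomaly_type": template.get("anomaly_type"),
        "attributes": collected,
    }

template = {
    "anomaly_type": "defender_realtime_protection_modified",
    "representative_attributes": ["registry", "process"],
}
event = {"registry": r"\REGISTRY\MACHINE\...\DisableBehaviorMonitoring = 1",
         "process": r"C:\...\EXCEL.EXE",
         "sha256": "755c..."}
print(collect_representative_attributes(event, template))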
[0076] The anomaly data 212 can be provided to the neural network 204 for further training of the neural network 204. For example, the production environment 210 can output the anomaly data 212, and the neural network 204 can process the anomaly data 212, in order to further increase consistency of subsequent training / template generation stages.
[0077] FIG. 3 illustrates a detailed example of the anomaly data enhancement introduced in FIG. 2, in accordance with various aspects of the technologies disclosed herein. In FIG. 3, example anomaly data 302 represents an instance of the anomaly data 202 introduced in FIG. 2, example neural network 304 represents an instance of the neural network 204 introduced in FIG. 2, example representative attribute(s) template 306 represents an instance of the representative attribute(s) template 206 introduced in FIG. 2, and example anomaly data 312 represents an instance of the anomaly data 212 introduced in FIG. 2.
[0078] In FIG. 3, the anomaly data 302 comprises the detected security event, “Modified Windows Defender Real-Time Protection Settings.” Furthermore, anomaly data 312 that is or has been used to train the neural network 304 includes:
"process": "C:\Program Files\Microsoft Office\Root\Office16\EXCEL.EXE",
"sha1": "6175bb755c12ced423cec7cea3234a4993fc67ce",
"registry": "\REGISTRY\MACHINE\SOFTWARE\policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring = 1",
"sha256": "755c1297315cd03f35f4b77db06f2520750c50b867615d3b51aff9afbbe5a2bc",
"properties": {
    "copyright": "Microsoft Corporation. All rights reserved.",
    "file version": "6.1.7600.16385",
    "product": "Microsoft Windows Operating System",
    "product version": "6.1.7600.16385"
}
[0079] The neural network 304 can generate the representative attribute(s) template 306 based on the anomaly data 302 and the anomaly data 312. The neural network 304 can select representative attributes from the anomaly data 302 and the anomaly data 312 to include in the representative attribute(s) template 306. In the illustrated example, the neural network 304 has identified the registry information from anomaly data 312 as a representative attribute for analyzing and resolving the anomaly associated with anomaly data 302. The neural network 304 has configured the representative attribute(s) template 306 to include a notification, "The System Has Detected a Modification In The Windows Defender Real-Time Protection Settings via the "{registry}" Key." The neural network 304 has configured the representative attribute(s) template 306 to furthermore include the identified representative attribute, "registry": "\REGISTRY\MACHINE\SOFTWARE\policies\Microsoft\Windows Defender\Real-Time Protection\DisableBehaviorMonitoring = 1".
[0081] FIG. 4 illustrates example multi-stage grouping to generate analyst work units, analyst work unit data enhancement, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein. FIG. 4 includes anomalies 410, multi-stage grouping 420, first stage input(s) 435, second stage input(s) 445, analyst work unit data enhancement 134, analyst work unit data prioritization and presentation 450, prioritization inputs 455, and analyst interactions 456.
[0082] The anomalies 410 include example anomalies 411, 412, 413, 414, and 415... The anomalies 410 represent anomalies that can be output from the anomaly detection systems 131A, 131B, 131C introduced in FIG. 1. The anomalies 410 can therefore include different anomalies detected via two or more different telemetry sources. The anomalies 410 can be received as inputs at the multi-stage grouping 420.
[0083] The multi-stage grouping 420 can implement multi-stage grouping 133, introduced in FIG. 1, in some embodiments. The multi-stage grouping 420 comprises first stage 430, second stage 440, and adaptive learning 447. The first stage 430 includes example first stage groupings, including threat occurrence group 431, threat occurrence group 432, and threat occurrence group 433... The threat occurrence group 431 can include, e.g., a group comprising anomaly 411 and anomaly 412, which resulted from grouping/clustering operations performed by the first stage 430. The threat occurrence group 432 can include, e.g., a group comprising anomaly 413, which resulted from grouping/clustering operations performed by the first stage 430. The threat occurrence group 433 can include, e.g., a group comprising anomalies 414 and 415, which resulted from grouping/clustering operations performed by the first stage 430.
[0084] The threat occurrence groups 431, 432, 433... can be generated by the first stage 430 based on data associated with the anomalies 410, first stage inputs 435, and/or adaptive learning inputs from adaptive learning 447. The first stage inputs 435 can include, e.g., threat intelligence records from a threat intelligence database. The first stage 430 can be configured to identify, based on the anomalies 410 and the threat intelligence records, which of the anomalies 410 are related by being associated with a common threat occurrence. The first stage 430 can then create a threat occurrence group, e.g., threat occurrence group 431, corresponding to the common threat occurrence, and the first stage 430 can link the related anomalies, e.g., the anomalies 411, 412, to the threat occurrence group 431.
[0085] The second stage 440 includes example second stage groupings, including analyst work unit 441, and analyst work unit 442... The analyst work unit 441 can include, e.g., a group comprising threat occurrence group 431 and threat occurrence group 432, which resulted from grouping/clustering operations performed by the second stage 440. The analyst work unit 442 can include, e.g., a group comprising threat occurrence group 433, which resulted from grouping/clustering operations performed by the second stage 440.
[0086] The analyst work units 441, 442... can be generated by the second stage 440 based on data associated with the threat occurrence groups 431, 432, 433..., the respective anomalies 410 included in the respective threat occurrence groups 431, 432, 433, the second stage inputs 445, and/or adaptive learning inputs from adaptive learning 447. The second stage inputs 445 can include, e.g., an asset inventory comprising asset information for different assets 111, 112, 121, 122 in a network 100. The asset inventory can comprise information such as asset type, asset value, asset geographic location, and/or asset criticality. The second stage 440 can be configured to identify, based on the threat occurrence groups 431, 432, 433 and the asset inventory, which of the threat occurrence groups 431, 432, 433 are related by being associated with a common group of assets. The second stage 440 can then create an analyst work unit, e.g., analyst work unit 441, corresponding to the common group of assets, and the second stage 440 can link the related threat occurrence groups, e.g., the threat occurrence groups 431, 432, to the analyst work unit 441.
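A deliberately simplified sketch of the two grouping stages follows: the first stage groups anomalies that share a common threat-intelligence indicator, and the second stage groups the resulting threat occurrence groups that touch a common set of assets. Exact-key grouping is used here purely for brevity; as noted herein, clustering methods such as spectral or modularity clustering can be used instead, and all field names are illustrative assumptions:

# An illustrative sketch of the two grouping stages (simplified to exact-key
# grouping). Field names such as "threat_indicator" and "asset" are
# hypothetical placeholders for whatever enriched anomaly data is available.
from collections import defaultdict

def first_stage(anomalies: list[dict]) -> list[list[dict]]:
    # Group anomalies that share a common threat-occurrence indicator.
    groups = defaultdict(list)
    for a in anomalies:
        groups[a["threat_indicator"]].append(a)
    return list(groups.values())

def second_stage(occurrence_groups: list[list[dict]]) -> list[list[list[dict]]]:
    # Group threat occurrence groups that touch a common set of assets.
    work_units = defaultdict(list)
    for group in occurrence_groups:
        asset_key = frozenset(a["asset"] for a in group)
        work_units[asset_key].append(group)
    return list(work_units.values())

anomalies = [
    {"id": 411, "threat_indicator": "T1562", "asset": "host-1"},
    {"id": 412, "threat_indicator": "T1562", "asset": "host-1"},
    {"id": 413, "threat_indicator": "T1090", "asset": "host-1"},
]
# Anomalies 411/412 form one occurrence group, 413 another; both touch the
# same asset, so the second stage merges them into one analyst work unit.
print(second_stage(first_stage(anomalies)))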
[0087] The analyst work unit data enhancement 134 is introduced in FIG. 1. In some embodiments, the analyst work unit data enhancement 134 can be configured according to FIG. 5. The analyst work unit data enhancement 134 can be configured to generate analyst summaries of analyst work units such as 441, 442 output from the multi-stage grouping 420. The resulting analyst summaries 451 (for analyst work unit 441) and 452 (for analyst work unit 442) can be supplied to the analyst work unit data prioritization and presentation 450.
[0088] Analyst work unit data prioritization and presentation 450 can prioritize and present the analyst summaries 451, 452 output from the analyst work unit data enhancement 134. Alternatively, e.g., in embodiments wherein analyst summaries 451, 452 are not generated, the analyst work unit data prioritization and presentation 450 can prioritize and present the analyst work units such as 441, 442 output from the multi-stage grouping 420. The analyst summaries 451, 452 or the analyst work units 441, 442 can be prioritized according to prioritization inputs 455. The prioritization inputs 455 can include, e.g., asset criticality values of assets affected by any analyst work unit, or confidence values associated with anomalies included in an analyst work unit, or any other data reflecting the level of risk or time urgency associated with an analyst work unit.
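One possible prioritization score, shown below as a hedged sketch rather than a prescribed formula, combines the highest asset criticality value touched by an analyst work unit with the mean confidence value of its anomalies; the weights and field names are illustrative assumptions:

# A sketch of one possible prioritization score: combine the highest asset
# criticality touched by a work unit with the mean detection confidence of
# its anomalies. Weights and field names are illustrative assumptions.
def priority_score(work_unit_anomalies: list[dict],
                   criticality_weight: float = 0.6,
                   confidence_weight: float = 0.4) -> float:
    max_criticality = max(a["asset_criticality"] for a in work_unit_anomalies)
    mean_confidence = (sum(a["confidence"] for a in work_unit_anomalies)
                       / len(work_unit_anomalies))
    return criticality_weight * max_criticality + confidence_weight * mean_confidence

unit = [{"asset_criticality": 0.9, "confidence": 0.7},
        {"asset_criticality": 0.4, "confidence": 0.95}]
print(round(priority_score(unit), 3))  # higher scores sort first in the UI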
[0089] The analyst summaries 451, 452 or the analyst work units 441, 442 can be presented to analysts, e.g., via a user interface (UI), and the analyst work unit data prioritization and presentation 450 can be configured to receive analyst interactions 456 via the UI. In some embodiments, analyst work unit data prioritization and presentation 450 can be configured to adaptively learn, from analyst interactions 456, additional prioritization data that can be used to prioritize subsequent analyst summaries 451, 452 or the analyst work units 441, 442. Furthermore, the analyst work unit data prioritization and presentation 450 can be configured to supply analyst interaction data 460 to adaptive learning 447, and the adaptive learning 447 can generate inputs for use by first stage 430 and/or second stage 440 in connection with grouping operations.
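Purely as an illustrative assumption about how analyst interaction data 460 could feed adaptive learning 447, the sketch below records analyst merge/split actions as adjustments to a pairwise affinity between anomaly features, which subsequent grouping runs could consult; the event schema and learning rate are hypothetical:

# A hypothetical sketch of adaptive learning from analyst interactions:
# "merge"/"split" actions nudge a pairwise affinity between anomaly features,
# which later grouping runs can consult when deciding what belongs together.
from collections import defaultdict

affinity: dict[tuple[str, str], float] = defaultdict(float)

def record_interaction(feature_a: str, feature_b: str, action: str,
                       learning_rate: float = 0.1) -> None:
    key = tuple(sorted((feature_a, feature_b)))
    if action == "merge":    # analyst joined two groups: raise affinity
        affinity[key] += learning_rate
    elif action == "split":  # analyst separated a group: lower affinity
        affinity[key] -= learning_rate

record_interaction("T1562", "T1090", "merge")
record_interaction("T1562", "T1105", "split")
print(dict(affinity))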
[0090] FIG. 5 illustrates example analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein. FIG. 5 includes analyst work unit 441, analyst work unit data enhancement 500, analyst summary 452, and threat intelligence data store 540. The analyst work unit 441 can comprise, e.g., the analyst work unit 441 output from the multi-stage grouping 420 as illustrated in FIG. 4. The example analyst work unit data enhancement 500 illustrated in FIG. 5 can implement the analyst work unit data enhancement 134 introduced in FIG. 1 and FIG. 4. The analyst summary 452 can comprise, e.g., the analyst summary 452 illustrated in FIG. 4. The threat intelligence data store 540 can comprise, e.g., an internal threat intelligence data store coupled to a same LAN as a server comprising the analyst work unit data enhancement 500, or an external threat intelligence data store accessed by the analyst work unit data enhancement 500 via a remote connection to an external network, other than the LAN, which comprises the threat intelligence data store 540.
[0091] The example analyst work unit data enhancement 500 comprises nearest neighbor search 510, neural network input generator 520, and neural network 530. In example operations according to FIG. 5, an analyst work unit 441 output from, e.g., a multi-stage grouping process such as illustrated in FIG. 4 can be processed by analyst work unit data enhancement 500 to thereby generate the analyst summary 452. First, the analyst work unit data enhancement 500 can employ nearest neighbor search 510 to perform a nearest neighbors search in the threat intelligence data store 540. The nearest neighbors search can identify one or more nearest neighbor threats, in the threat intelligence data store 540, that have comparatively higher, or highest, similarity to the analyst work unit 441. The result of the nearest neighbors search is labeled as threat 511 in FIG. 5, and also referred to herein as a similar threat 511.
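A minimal sketch of such a nearest neighbors search, assuming the analyst work unit and the stored threats have already been embedded as numeric vectors (the embedding step itself is outside this sketch), ranks stored threats by cosine similarity:

# A minimal nearest-neighbor sketch for similar-threat lookup: embed the
# analyst work unit and the stored threats as vectors (the embedding step is
# assumed and out of scope here) and rank by cosine similarity.
import numpy as np

def nearest_threat(work_unit_vec: np.ndarray,
                   threat_vecs: np.ndarray) -> int:
    # Normalize rows, then cosine similarity reduces to a dot product.
    wu = work_unit_vec / np.linalg.norm(work_unit_vec)
    tv = threat_vecs / np.linalg.norm(threat_vecs, axis=1, keepdims=True)
    return int(np.argmax(tv @ wu))  # index of the most similar threat 511

threats = np.array([[0.1, 0.9, 0.0], [0.8, 0.1, 0.1], [0.4, 0.4, 0.2]])
print(nearest_threat(np.array([0.7, 0.2, 0.1]), threats))  # prints 1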
[0092] After identifying the similar threat 511, the analyst work unit data enhancement 500 can use the neural network input generator 520 to generate inputs for the neural network 530. For example, the neural network input generator 520 can configure a command 521, analyst work unit data 522, and threat data 523. The command 521 can optionally comprise a natural language command such as, "Here are some data from a security alert. Compose a concise story of what happened, as if it were a security risk report." The analyst work unit data 522 can comprise events or attributes from the input analyst work unit 441, such as:
Event 1 title: Tor
Event 1 text: You are sending traffic to the Tor network
Event 2 title: Disabling security tools
Event 2 data: "commandLineArguments": "sudo spctl --master-disable"
[0093] The threat data 523 can comprise events or attributes from the similar threat 511, such as a risk level (high, medium, or low) associated with the similar threat.
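The following sketch shows one way the neural network input generator 520 could assemble the command 521, the analyst work unit data 522, and the threat data 523 into a single text prompt for the neural network 530. Only the three input roles come from this disclosure; the exact prompt layout and function name are assumptions:

# A hedged sketch of the neural network input generator 520: concatenate the
# natural language command, the work unit events, and the similar-threat risk
# level into one prompt for a text-generation model.
def build_summary_prompt(command: str, events: list[dict], risk_level: str) -> str:
    lines = [command, ""]
    for i, event in enumerate(events, start=1):
        lines.append(f"Event {i} title: {event['title']}")
        lines.append(f"Event {i} data: {event['data']}")
    lines.append("")
    lines.append(f"Similar known threat risk level: {risk_level}")
    return "\n".join(lines)

prompt = build_summary_prompt(
    "Here are some data from a security alert. Compose a concise story of "
    "what happened, as if it were a security risk report.",
    [{"title": "Tor", "data": "You are sending traffic to the Tor network"},
     {"title": "Disabling security tools",
      "data": 'commandLineArguments: "sudo spctl --master-disable"'}],
    risk_level="high",
)
print(prompt)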
[0094] The neural network 530 can generate the analyst summary 452 in response to the inputs 521, 522, 523 from the neural network input generator 520. Example analyst summaries can include, inter alia, a human readable summary comprising a title, summary, details, and/or recommendations, as provided herein.
[0095] In some embodiments, e.g., embodiments that do not implement the multi-stage grouping processes described herein, an input to the analyst work unit data enhancement 500 can comprise an anomaly such as one of the anomalies 410, or a threat occurrence group such as one of the threat occurrence groups 431, 432, 433, in which case the generated analyst summary 452 output by the analyst work unit data enhancement 500 can comprise a summary of the anomaly or threat occurrence group, rather than the analyst summary 452 for the analyst work unit 441.
[0096] FIG. 6 illustrates an example computer hardware architecture that can implement a server computer 600, in accordance with various aspects of the technologies disclosed herein. The computer architecture shown in FIG. 6 illustrates a conventional server computer, workstation, desktop computer, laptop, tablet, network appliance, e-reader, smartphone, or other computing device, and can be utilized to execute any of the software components presented herein.
[0097] The server computer 600 includes a baseboard 602, or “motherboard,” which is a printed circuit board to which a multitude of components or devices can be connected by way of a system bus or other electrical communication paths. In one illustrative configuration, one or more central processing units (“CPUs”) 604 operate in conjunction with a chipset 606. The CPUs 604 can be standard programmable processors that perform arithmetic and logical operations necessary for the operation of the server computer 600.
[0098] The CPUs 604 perform operations by transitioning from one discrete, physical state to the next through the manipulation of switching elements that differentiate between and change these states. Switching elements generally include electronic circuits that maintain one of two binary states, such as flip-flops, and electronic circuits that provide an output state based on the logical combination of the states of one or more other switching elements, such as logic gates. These basic switching elements can be combined to create more complex logic circuits, including registers, adders-subtractors, arithmetic logic units, floating-point units, and the like.
[0099] The chipset 606 provides an interface between the CPUs 604 and the remainder of the components and devices on the baseboard 602. The chipset 606 can provide an interface to a RAM 608, used as the main memory in the server computer 600. The chipset 606 can further provide an interface to a computer-readable storage medium such as a read-only memory (“ROM”) 610 or non-volatile RAM (“NVRAM”) for storing basic routines that help to start up the server computer 600 and to transfer information between the various components and devices. The ROM 610 or NVRAM can also store other software components necessary for the operation of the server computer 600 in accordance with the configurations described herein.
[0100] The server computer 600 can operate in a networked environment using logical connections to remote computing devices and computer systems through a network, such as the LAN 624. The chipset 606 can include functionality for providing network connectivity through a NIC 612, such as a gigabit Ethernet adapter. The NIC 612 is capable of connecting the server computer 600 to other computing devices over the network 624. It should be appreciated that multiple NICs 612 can be present in the server computer 600, connecting the computer to other types of networks and remote computer systems.
[0101] The server computer 600 can be connected to a storage device 618 that provides non-volatile storage for the server computer 600. The storage device 618 can store an operating system 620, programs 622, and data, to implement any of the various components described in detail herein. The storage device 618 can be connected to the server computer 600 through a storage controller 614 connected to the chipset 606. The storage device 618 can comprise one or more physical storage units. The storage controller 614 can interface with the physical storage units through a serial attached SCSI (“SAS”) interface, a serial advanced technology attachment (“SATA”) interface, a fiber channel (“FC”) interface, or other type of interface for physically connecting and transferring data between computers and physical storage units.
[0102] The server computer 600 can store data on the storage device 618 by transforming the physical state of the physical storage units to reflect the information being stored. The specific transformation of physical state can depend on various factors, in different embodiments of this description. Examples of such factors can include, but are not limited to, the technology used to implement the physical storage units, whether the storage device 618 is characterized as primary or secondary storage, and the like.
[0103] For example, the server computer 600 can store information to the storage device 618 by issuing instructions through the storage controller 614 to alter the magnetic characteristics of a particular location within a magnetic disk drive unit, the reflective or refractive characteristics of a particular location in an optical storage unit, or the electrical characteristics of a particular capacitor, transistor, or other discrete component in a solid-state storage unit. Other transformations of physical media are possible without departing from the scope and spirit of the present description, with the foregoing examples provided only to facilitate this description. The server computer 600 can further read information from the storage device 618 by detecting the physical states or characteristics of one or more particular locations within the physical storage units.
[0104] In addition to the mass storage device 618 described above, the server computer 600 can have access to other computer-readable storage media to store and retrieve information, such as program modules, data structures, or other data. It should be appreciated by those skilled in the art that computer-readable storage media is any available media that provides for the non-transitory storage of data and that can be accessed by the server computer 600. In some examples, the operations performed by the computing elements illustrated in FIGS. 1-5, and/or any components included therein, may be supported by one or more devices similar to server computer 600.
[0105] By way of example, and not limitation, computer-readable storage media can include volatile and non-volatile, removable and non-removable media implemented in any method or technology. Computer-readable storage media includes, but is not limited to, RAM, ROM, erasable programmable ROM (“EPROM”), electrically-erasable programmable ROM (“EEPROM”), flash memory or other solid-state memory technology, compact disc ROM (“CD-ROM”), digital versatile disk (“DVD”), high definition DVD (“HD-DVD”), BLU-RAY, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information in a non-transitory fashion.
[0106] As mentioned briefly above, the storage device 618 can store an operating system 620 utilized to control the operation of the server computer 600. According to one embodiment, the operating system comprises the LINUX operating system. According to another embodiment, the operating system comprises the WINDOWS® SERVER operating system from MICROSOFT Corporation of Redmond, Washington. According to further embodiments, the operating system can comprise the UNIX operating system or one of its variants. It should be appreciated that other operating systems can also be utilized. The storage device 618 can store other system or application programs and data utilized by the server computer 600.
[0107] In one embodiment, the storage device 618 or other computer-readable storage media is encoded with computer-executable instructions which, when loaded into the server computer 600, transform the computer from a general-purpose computing system into a special-purpose computer capable of implementing the embodiments described herein. These computer-executable instructions transform the server computer 600 by specifying how the CPUs 604 transition between states, as described above. According to one embodiment, the server computer 600 has access to computer-readable storage media storing computer-executable instructions which, when executed by the server computer 600, perform the various processes described with regard to FIGS. 7, 8, and 9. The server computer 600 can also include computer-readable storage media having instructions stored thereupon for performing any of the other computer-implemented operations described herein.
[0108] The server computer 600 can also include one or more input/output controllers 616 for receiving and processing input from a number of input devices, such as a keyboard, a mouse, a touchpad, a touch screen, an electronic stylus, or other type of input device. Similarly, an input/output controller 616 can provide output to a display, such as a computer monitor, a flat-panel display, a digital projector, a printer, or other type of output device. It will be appreciated that the server computer 600 might not include all of the components shown in FIG. 6, can include other components that are not explicitly shown in FIG. 6, or might utilize an architecture completely different than that shown in FIG. 6.
[0109] FIGS. 7, 8, and 9 are flow diagrams of example methods 700, 800, 900 performed at least partly by a computing device, such as the server computer 600. The logical operations described herein with respect to FIGS. 7, 8, and 9 may be implemented (1) as a sequence of computer-implemented acts or program modules running on a computing system and/or (2) as interconnected machine logic circuits or circuit modules within the computing system. In some examples, the methods 700, 800, and 900 may be performed by a system comprising one or more processors and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform the methods 700, 800, and 900.
[0110] The implementation of the various components described herein is a matter of choice dependent on the performance and other requirements of the computing system. Accordingly, the logical operations described herein are referred to variously as operations, structural devices, acts, or modules. These operations, structural devices, acts, and modules can be implemented in software, in firmware, in special purpose digital logic, and any combination thereof. It should also be appreciated that more or fewer operations might be performed than shown in FIGS. 7, 8, and 9 and described herein. These operations can also be performed in parallel, or in a different order than those described herein. Some or all of these operations can also be performed by components other than those specifically identified. Although the techniques described in this disclosure are described with reference to specific components, in other examples, the techniques may be implemented by fewer components, more components, different components, or any configuration of components.
[0111] FIG. 7 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with anomaly data enhancement, in accordance with various aspects of the technologies disclosed herein.
At operation 702, the server computer 600 can optionally detect a trigger which initiates the training / template generation stage 704. The detected trigger can comprise, e.g., an occurrence of a new type of anomaly in a network, or changes in an anomaly schema associated with previously detected anomalies.
[0112] At 704, the server computer 600 can perform the training / template generation stage 704, which results in generating a template that can be deployed to a production environment, wherein the production environment can use the template to collect representative attributes for an anomaly associated with the template. The training / template generation stage 704 can optionally be repeated multiple times, as described herein.
[0113] At 706, the server computer 600 can receive security event information associated with an anomaly. The security event information can comprise, e.g., an identification of an anomaly and at least one attribute associated with the anomaly and detected in a network 100.
[0114] At 708, the server computer 600 can identify representative attribute(s) of the anomaly identified at 706.
For example, the training / template generation stage 704 can provide the security event information received at 706 as an input to a neural network-based processor, e.g., the neural network 204 illustrated in FIG. 2, along with an instruction for the neural network-based processor to identify representative attribute(s). The neural network-based processor can comprise, e.g., an NLP-based or an LLM-based processor. The neural network-based processor can be configured to identify at least one representative attribute based on the input. The at least one representative attribute can be determined by the neural network-based processor to represent the anomaly for security analyses of instances of the anomaly.
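As a hypothetical illustration of operation 708, the sketch below builds an instruction for such an NLP- or LLM-based processor, asking it to return the representative attributes as JSON; the instruction wording and the stand-in event fields are assumptions, and the actual model interface is deliberately not shown:

# A sketch (assumptions throughout) of how the instruction to the neural
# network-based processor might be assembled: pass the raw security event
# information plus a request to return a JSON template of representative
# attributes. The model call itself is out of scope for this sketch.
import json

def make_template_request(security_event: dict) -> str:
    return (
        "Given this security event, list the attributes most useful for "
        "analyzing and resolving this anomaly type. Reply as JSON with a "
        '"representative_attributes" array.\n'
        + json.dumps(security_event, indent=2)
    )

print(make_template_request({
    "event": "Modified Windows Defender Real-Time Protection Settings",
    "registry": r"...\DisableBehaviorMonitoring = 1",
}))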
[0115] The at least one representative attribute can include an attribute previously included in the security event information received at 706, or the at least one representative attribute can include a different attribute, other than the attributes previously included in the security event information. In some embodiments, the at least one representative attribute can include multiple representative attributes, optionally including both attributes previously included in the security event information as well as attributes not previously included in the security event information.
[0116] At 710, the server computer 600 can generate a template comprising the at least one representative attribute. In some embodiments, the template can be generated directly by the neural network-based processor. In other embodiments, the template can be generated separately, and can include the at least one representative attribute output by the neural network-based processor. An example template is illustrated herein by the representative attribute(s) template 306 in FIG. 3.
[0117] At 712, the server computer 600 can perform a consistency check. The consistency check can comprise comparing multiple templates generated via multiple repeated cycles of the training / template generation stage 704, in order to determine consistency of the multiple templates.
[0118] The training / template generation stage 704 can optionally be repeated for each of multiple respective first instances of the anomaly identified at 706. The receiving at 706, the identifying at 708, and the generating at 710 can be repeated in order to produce multiple templates, and the consistency check at 712 can be repeated, optionally at each cycle, to compare the multiple templates. In some embodiments, at least one of the multiple repetition cycles can be triggered in response to a change in a detection engine, such as an anomaly detection system 131A, configured to detect the anomaly in the network 100, wherein the change in the detection engine results in a change in the at least one attribute associated with the anomaly.
[0119] At 714, if the multiple templates pass the consistency check at 712, e.g., if the multiple templates meet a predefined similarity threshold, then the training / template generation stage 704 can be completed and, at 716, a representative consistency-checked template can be deployed to a production environment. Therefore, deploying the template to the production environment can be performed in response to the consistency of the multiple templates satisfying a consistency threshold. However, if the multiple templates do not pass the consistency check, then additional repetitions of the training / template generation stage 704 can be performed, using additional instances of the anomaly identified at 706.
[0120] At 716, a template output from the training / template generation stage 704 can be deployed to a production environment configured to automatically detect the instances of the anomaly in the network. At 718, the production environment can be configured to use the template to define at least one collected attribute that is collected for the security analyses of the instances of the anomaly.
[0121] Instances of the anomaly occurring after the template is deployed at 716 may be referred to as second instances of the anomaly, while instances of the anomaly occurring before the template is deployed may be referred to herein as first instances of the anomaly. In some embodiments, the security event information resulting from a second instance of the anomaly can be fed back from the production environment to operation 706, so that second security event information comprising a collected attribute associated with a second instance of the anomaly can be used during further repetitions of the training / template generation stage 704, to increase / enhance consistency of templates generated by the neural network-based processor.
[0122] FIG. 8 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with multi-stage grouping to generate analyst work units, analyst work unit data prioritization and presentation, and security team interactions with the analyst work units, in accordance with various aspects of the technologies disclosed herein. At operation 802, the server computer 600 can detect anomalies in a network 100. Alternatively, the server computer 600 can receive anomaly data representing anomalies detected, e.g., at anomaly detection systems 131A, 131B, 131C. The anomalies can therefore include different anomalies detected via multiple different telemetry sources. As shown in FIG. 1, the network 100 can comprise multiple different domains 110, 120 and multiple different computing assets such as 111, 112, 121, and 122. Different anomalies can be detected with different confidence values. Furthermore, different computing assets of the multiple different computing assets 111, 112, 121, and 122 can be associated with different asset criticality values, e.g., as described in connection with FIG. 1.
[0123] At 804, the server computer 600 can perform a first stage grouping of a multi-stage grouping process, whereby threat occurrence groups can be generated. For example, the server computer 600 can analyze the anomalies received at 802 based on threat intelligence information, in order to group the anomalies into multiple different threat occurrence groups, wherein each threat occurrence group comprises one or more of the anomalies. Operation 804 can use any desired grouping approach, e.g., any unsupervised clustering or community selection method, including but not limited to a spectral clustering process or a modularity clustering process.
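For example, a first stage grouping based on spectral clustering, one of the clustering approaches named above, could operate on a precomputed pairwise affinity matrix over the anomalies; the toy affinity values and cluster count below are fabricated for demonstration only:

# An illustrative first-stage grouping using spectral clustering over a
# precomputed affinity matrix. The affinity values (e.g., derived from shared
# threat-intelligence indicators) and cluster count are toy assumptions.
import numpy as np
from sklearn.cluster import SpectralClustering

# Pairwise similarity between five anomalies; higher values indicate the
# anomalies more likely belong to the same threat occurrence.
affinity = np.array([
    [1.0, 0.9, 0.1, 0.0, 0.1],
    [0.9, 1.0, 0.2, 0.1, 0.0],
    [0.1, 0.2, 1.0, 0.1, 0.2],
    [0.0, 0.1, 0.1, 1.0, 0.8],
    [0.1, 0.0, 0.2, 0.8, 1.0],
])
labels = SpectralClustering(n_clusters=3,
                            affinity="precomputed",
                            random_state=0).fit_predict(affinity)
print(labels)  # anomalies sharing a label form one threat occurrence group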
[0124] The first stage grouping at 804 can use any of a variety of different variables for grouping. Anomalies occurring in different domains and in different computing assets can potentially be grouped together in a threat occurrence group. For example, at least one first threat occurrence group can comprise two or more different anomalies related to two or more different domains. At least one second threat occurrence group can comprise two or more different anomalies related to two or more different computing assets.
[0125] At 806, the server computer 600 can perform a second stage grouping of the multi-stage grouping process, whereby analyst work units can be generated. For example, multiple different threat occurrence groups output from operation 804 can be grouped into multiple different analyst work units, wherein each analyst work unit comprises one or more of the threat occurrence groups. Operation 806 can use any desired grouping approach, e.g., grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise applying any unsupervised clustering or community selection method, including but not limited to spectral clustering processes and modularity clustering processes.
[0126] The second stage grouping at 806 can use any of a variety of different variables for grouping. For example, in some embodiments, grouping the multiple different threat occurrence groups into multiple different analyst work units can comprise grouping the multiple different threat occurrence groups according to geographic locations of respective computing assets affected by respective anomalies included in respective analyst work units.
[0127] In some embodiments, grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise grouping the multiple different threat occurrence groups according to threat types associated with respective anomalies included in respective analyst work units. [0128] In some embodiments, grouping the multiple different threat occurrence groups into multiple different analyst work units at 806 can comprise grouping the multiple different threat occurrence groups according to response types associated with respective anomalies included in respective analyst work units.
[0129] Operation 808 comprises analyst work unit data enhancement. For example, processes described in connection with FIG. 5 can be performed in order to generate analyst summaries of analyst work units output from operation 806.
[0130] Operation 810 comprises prioritizing analyst work units. Analyst work units, and optionally the analyst summaries generated at 808, can be prioritized based on any of the variables disclosed herein. In some embodiments, operation 810 can comprise prioritizing multiple different analyst work units based at least in part on respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units. In some embodiments, operation 810 can comprise prioritizing multiple different analyst work units based at least in part on respective confidence values of respective anomalies included in respective analyst work units.
[0131] Operation 812 comprises displaying analyst work units. For example, a prioritized display can be provided, the prioritized display comprising the multiple different analyst work units generated at 804, 806, and 808, and prioritized at 810. [0132] Operation 814 comprises receiving analyst interactions with analyst work units. For example, one or more analyst interactions can be received via the prioritized display provided at 812, and the analyst interactions can result in analyst interaction data.
[0133] Operation 816 comprises storing analyst interaction data, e.g., storing the analyst interaction data received at 814. Operation 818 comprises outputting the analyst interaction data for use in adaptive learning, e.g., for use in adaptive learning applicable to subsequent grouping operations 804, 806 to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units. The analyst interaction data can furthermore be for use in adaptive learning applicable to subsequent prioritizing operations at 810, to facilitate subsequent prioritizing of subsequent analyst work units. [0134] FIG. 9 is a flow diagram that illustrates an example method performed by a server computer 600 in connection with analyst work unit data enhancement, in accordance with various aspects of the technologies disclosed herein. At operation 902, the server computer 600 can receive an analyst work unit. The analyst work unit can comprise an output of a second stage of a multi-stage grouping process. In example embodiments, the analyst work unit can comprise one or more threat occurrence groups, and each of the one or more threat occurrence groups can comprise one or more detected anomalies detected in a network 100 comprising multiple different computing assets.
[0135] At operation 904, the server computer 600 can identify, within a data store comprising computing threat information, e.g., within a threat intelligence data store 540 such as illustrated in FIG. 5, which can optionally be implemented as an internal or an external database, at least one similar threat that has higher similarity to the analyst work unit (received at 902) than one or more other threats identified in the data store. In some embodiments, identifying the similar threat can comprise performing a nearest neighbor search on the data store, to find a threat in the data store that comprises, e.g., a highest similarity to the analyst work unit.
[0136] At operation 906, the server computer 600 can configure inputs for a neural network-based generator. For example, the server computer 600 can configure a natural language command, one or more first events based on the analyst work unit, one or more second events based on the at least one similar threat identified at 904, and/or a risk level based on the at least one similar threat identified at 904. These inputs can be provided to the neural network-based generator to initiate operation 908.
[0137] At 908, the server computer 600 can generate an analyst summary of the analyst work unit based on the analyst work unit and the at least one similar threat. Generating the analyst summary can comprise using a neural network-based generator to process inputs generated at 906, e.g., the natural language command, one or more first events based on the analyst work unit, one or more second events based on the at least one similar threat identified at 904, and/or a risk level based on the at least one similar threat identified at 904. In some embodiments, the neural network-based generator can be configured to use at least one of NLP or an LLM.
[0138] When the threat intelligence data store accessed at operation 904 comprises threat response playbook information, generating the analyst summary at 908 can comprise generating, based on the threat response playbook information associated with the similar threat, a next action recommendation associated with the analyst work unit.
When the similar threat is associated with a risk level, generating the analyst summary at 908 can comprise providing the risk level to the neural network-based generator so that the neural network-based generator can incorporate the risk level in the analyst summary.
[0139] At 910, the server computer 600 can output the analyst summary generated at 908. The analyst summary can comprise, e.g., one or more different sections corresponding to the one or more first events based on the analyst work unit, and other information as described herein.
[0140] In summary, techniques described herein for extended detection and response to security anomalies in computing networks can perform automated analysis of anomalies occurring in different telemetry sources in a computer network, in order to synthesize the anomalies into analyst work units that are surfaced for further analysis
by security response teams. Anomalies can initially be processed in order to identify and collect extended anomaly data. The extended anomaly data can then be used to group the anomalies according to a multi-stage grouping process which produces analyst work units. The analyst work units can be processed to produce analyst summaries that assist with analysis and response. Furthermore, the analyst work units can be prioritized for further analysis, and analyst interactions with the prioritized analyst work units can be used to influence subsequent anomaly grouping operations.
[0141] While the invention is described with respect to the specific examples, it is to be understood that the scope of the invention is not limited to these specific examples. Since other modifications and changes varied to fit particular operating requirements and environments will be apparent to those skilled in the art, the invention is not considered limited to the example chosen for purposes of disclosure and covers all changes and modifications which do not constitute departures from the true spirit and scope of this invention.
[0142] Although the application describes embodiments having specific structural features and/or methodological acts, it is to be understood that the claims are not necessarily limited to the specific features or acts described. Rather, the specific features and acts are merely illustrative of some embodiments that fall within the scope of the claims of the application.
Claims
1. A method comprising: detecting anomalies in a network, the network comprising multiple different domains and multiple different computing assets, wherein different anomalies are detected with different confidence values, and wherein different computing assets of the multiple different computing assets are associated with different asset criticality values; analyzing the anomalies based on threat intelligence information in order to group the anomalies into multiple different threat occurrence groups, wherein each threat occurrence group comprises one or more of the anomalies; grouping the multiple different threat occurrence groups into multiple different analyst work units, wherein each analyst work unit comprises one or more of the threat occurrence groups; prioritizing the multiple different analyst work units based at least in part on: respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units; and respective confidence values of respective anomalies included in respective analyst work units; providing a prioritized display of the multiple different analyst work units; receiving one or more analyst interactions via the prioritized display of the multiple different analyst work units, resulting in analyst interaction data; and storing the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
2. The method of claim 1, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to geographic locations of respective computing assets affected by respective anomalies included in respective analyst work units.
3. The method of claim 1 or 2, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to threat types associated with respective anomalies included in respective analyst work units.
4. The method of any of claims 1 to 3, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to response types associated with respective anomalies included in respective analyst work units.
5. The method of any of claims 1 to 4, wherein the analyst interaction data is furthermore for use in subsequent prioritizing operations to facilitate subsequent prioritizing of subsequent analyst work units.
6. The method of any of claims 1 to 5, wherein the grouping the multiple different threat occurrence groups into multiple different analyst work units comprises applying a spectral clustering process or a modularity clustering process.
7. The method of any of claims 1 to 6, wherein the different anomalies are detected via two or more different telemetry sources.
8. A device comprising: one or more processors; one or more computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: detecting anomalies in a network, the network comprising multiple different domains and multiple different computing assets, wherein different anomalies are detected with different confidence values, and wherein different computing assets of the multiple different computing assets are associated with different asset criticality values; analyzing the anomalies based on threat intelligence information in order to group the anomalies into multiple different threat occurrence groups, wherein each threat occurrence group comprises one or more of the anomalies; grouping the multiple different threat occurrence groups into multiple different analyst work units, wherein each analyst work unit comprises one or more of the threat occurrence groups; prioritizing the multiple different analyst work units based at least in part on: respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units; and respective confidence values of respective anomalies included in respective analyst work units; providing a prioritized display of the multiple different analyst work units; receiving one or more analyst interactions via the prioritized display of the multiple different analyst work units, resulting in analyst interaction data; and storing the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
9. The device of claim 8, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to geographic locations of respective computing assets affected by respective anomalies included in respective analyst work units.
10. The device of claim 8 or 9, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to threat types associated with respective anomalies included in respective analyst work units.
11. The device of any of claims 8 to 10, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to response types associated with respective anomalies included in respective analyst work units.
12. The device of any of claims 8 to 11, wherein the analyst interaction data is furthermore for use in subsequent prioritizing operations to facilitate subsequent prioritizing of subsequent analyst work units.
13. The device of any of claims 8 to 12, wherein the grouping the multiple different threat occurrence groups into multiple different analyst work units comprises applying a spectral clustering process or a modularity clustering process.
14. The device of any of claims 8 to 13, wherein the different anomalies are detected via two or more different telemetry sources.
15. A method comprising: detecting anomalies in a network; analyzing the anomalies based on threat intelligence information in order to group the anomalies into multiple different threat occurrence groups; grouping the multiple different threat occurrence groups into multiple different analyst work units; prioritizing the multiple different analyst work units, resulting in prioritized analyst work units; receiving one or more analyst interactions with the prioritized analyst work units, resulting in analyst interaction data; and storing the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
16. The method of claim 15, wherein prioritizing the multiple different analyst work units comprises prioritizing the multiple different analyst work units according to respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units.
17. The method of claim 15 or 16, wherein prioritizing the multiple different analyst work units comprises prioritizing the multiple different analyst work units according to respective confidence values of respective anomalies included in respective analyst work units.
18. The method of any of claims 15 to 17, wherein grouping the multiple different threat occurrence groups into multiple different analyst work units comprises grouping the multiple different threat occurrence groups according to one or more of geographic locations, threat types, or response types associated with respective analyst work units.
19. The method of any of claims 15 to 18, wherein the analyst interaction data is furthermore for use in subsequent prioritizing operations to facilitate subsequent prioritizing of subsequent analyst work units.
20. The method of any of claims 15 to 19, wherein the grouping the multiple different threat occurrence groups into multiple different analyst work units comprises applying a spectral clustering process.
21. Apparatus comprising: means for detecting anomalies in a network, the network comprising multiple different domains and multiple different computing assets, wherein different anomalies are detected with different confidence values, and wherein different computing assets of the multiple different computing assets are associated with different asset criticality values; means for analyzing the anomalies based on threat intelligence information in order to group the anomalies into multiple different threat occurrence groups, wherein each threat occurrence group comprises one or more of the anomalies; means for grouping the multiple different threat occurrence groups into multiple different analyst work units, wherein each analyst work unit comprises one or more of the threat occurrence groups; means for prioritizing the multiple different analyst work units based at least in part on: respective asset criticality values of respective computing assets affected by respective anomalies included in respective analyst work units; and respective confidence values of respective anomalies included in respective analyst work units; means for providing a prioritized display of the multiple different analyst work units; means for receiving one or more analyst interactions via the prioritized display of the multiple different analyst work units, resulting in analyst interaction data; and means for storing the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
22. The apparatus according to claim 21 further comprising means for implementing the method according to any of claims 2 to 7.
23. Apparatus comprising: means for detecting anomalies in a network; means for analyzing the anomalies based on threat intelligence information in order to group the anomalies into multiple different threat occurrence groups; means for grouping the multiple different threat occurrence groups into multiple different analyst work units; means for prioritizing the multiple different analyst work units, resulting in prioritized analyst work units; means for receiving one or more analyst interactions with the prioritized analyst work units, resulting in analyst interaction data; and means for storing the analyst interaction data for use in subsequent grouping operations to facilitate grouping subsequent threat occurrence groups into subsequent analyst work units.
24. The apparatus according to claim 23 further comprising means for implementing the method according to any of claims 16 to 20.
25. A computer program, computer program product or computer readable medium comprising instructions which, when executed by a computer, cause the computer to carry out the steps of the method of any of claims 1 to 7 or 15 to 20.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363461374P | 2023-04-24 | 2023-04-24 | |
| US63/461,374 | 2023-04-24 | ||
| US18/231,816 | 2023-08-09 | ||
| US18/231,816 US20240356943A1 (en) | 2023-04-24 | 2023-08-09 | Alert fusion for extended detection and response to security anomalies |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2024226361A1 true WO2024226361A1 (en) | 2024-10-31 |
Family
ID=91186659
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/024887 Pending WO2024226361A1 (en) | 2023-04-24 | 2024-04-17 | Alert fusion for extended detection and response to security anomalies |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2024226361A1 (en) |
Patent Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20190362278A1 (en) * | 2018-05-26 | 2019-11-28 | Guavus, Inc. | Organization and asset hierarchy for incident prioritization |
| US20220224721A1 (en) * | 2021-01-13 | 2022-07-14 | Microsoft Technology Licensing, Llc | Ordering security incidents using alert diversity |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11902316B2 (en) | | Real-time cybersecurity status system with event ticker |
| US12301617B2 (en) | | System and method for implementing an artificial intelligence security platform |
| US11218510B2 (en) | | Advanced cybersecurity threat mitigation using software supply chain analysis |
| US11036867B2 (en) | | Advanced rule analyzer to identify similarities in security rules, deduplicate rules, and generate new rules |
| EP3841502B1 (en) | | Enhancing cybersecurity and operational monitoring with alert confidence assignments |
| US9210044B2 (en) | | Automated remediation with an appliance |
| JP7728968B2 (en) | | Systems and methods for detecting malicious hands-on keyboard activity via machine learning |
| US11765189B2 (en) | | Building and maintaining cyber security threat detection models |
| EP4091084B1 (en) | | Endpoint security using an action prediction model |
| Tariq et al. | | Alert fatigue in security operations centres: Research challenges and opportunities |
| CA3204098A1 (en) | | Systems, devices, and methods for observing and/or securing data access to a computer network |
| US20250117485A1 (en) | | Artificial intelligence (AI)-based system for detecting malware in endpoint devices using a multi-source data fusion and method thereof |
| Bellas et al. | | A methodology for runtime detection and extraction of threat patterns |
| Kurnia et al. | | Toward Robust Security Orchestration and Automated Response in Security Operations Centers with a Hyper-Automation Approach Using Agentic Artificial Intelligence |
| US12381897B2 (en) | | Systems and methods for automatically creating normalized security events in a cybersecurity threat detection and mitigation platform |
| US20240356942A1 (en) | | Incident descriptions for extended detection and response to security anomalies |
| Sallay et al. | | Intrusion detection alert management for high-speed networks: current researches and applications |
| WO2024226361A1 (en) | | Alert fusion for extended detection and response to security anomalies |
| WO2024226359A1 (en) | | Incident descriptions for extended detection and response to security anomalies |
| WO2024226356A1 (en) | | Event descriptions for extended detection and response to security anomalies |
| Preuveneers et al. | | On the use of AutoML for combating alert fatigue in security operations centers |
| US20250045385A1 (en) | | System and method for terminating ransomware based on detection of anomalous data |
| US20250023892A1 (en) | | Determining the impact of malicious processes in IT infrastructure |
| US20250373647A1 (en) | | Misconfiguration Detection and Prevention in a Data Fabric |
| Doshi | | Live log analysis using integrated SIEM and IDS using Machine Learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24727536; Country of ref document: EP; Kind code of ref document: A1 |
| | WWE | Wipo information: entry into national phase | Ref document number: 2024727536; Country of ref document: EP |