US20160300063A1 - Software vulnerabilities detection system and methods - Google Patents
- Publication number
- US20160300063A1 (application US 14/460,636)
- Authority
- US
- United States
- Prior art keywords
- instruction
- data
- unsafe
- software vulnerabilities
- security
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/57—Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
- G06F21/577—Assessing vulnerabilities and evaluating computer system security
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3604—Analysis of software for verifying properties of programs
- G06F11/3608—Analysis of software for verifying properties of programs using formal methods, e.g. model checking, abstract interpretation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3668—Testing of software
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
- G06F21/55—Detecting local intrusion or implementing counter-measures
- G06F21/56—Computer malware detection or handling, e.g. anti-virus arrangements
- G06F21/566—Dynamic detection, i.e. detection performed at run-time, e.g. emulation, suspicious activities
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/033—Test or assess software
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2221/00—Indexing scheme relating to security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F2221/03—Indexing scheme relating to G06F21/50, monitoring users, programs or devices to maintain the integrity of platforms
- G06F2221/034—Test or assess a computer or a system
Description
- This invention was made with government support under the CyberFastTrack program documented in DARPA PA-11-53, dated Jan. 31, 2013, awarded by the Defense Advanced Research Projects Agency (DARPA).
- This invention relates generally to ensuring software security and in particular to exposing software vulnerabilities by performing static and dynamic analysis of compiled software.
- Software security and vulnerability checking is an active field of academic and industrial pursuit. With news of hackers exploiting software vulnerabilities now a commonplace occurrence, it is unsurprising to see many academic and professional institutions focusing their efforts on developing tools and practices that aim to make software more secure against exploitative attacks from global hackers and adversaries.
- There are many ways of detecting and addressing vulnerabilities in software in the prior art. U.S. Pat. No. 8,499,353 discloses security assessment and vulnerability testing of software applications based in part on application metadata in order to determine an appropriate assurance level and associated test plan that includes multiple types of analysis. Steps from each test are combined into a “custom” or “application-specific” workflow, and the results of each test then correlated with other results to identify potential vulnerabilities.
- U.S. Pat. No. 8,365,155 describes a software analysis framework utilizing a decompilation method and system for parsing executable code, identifying and recursively modeling data flows, identifying and recursively modeling control flow and iteratively refining these models to provide a complete model at the nanocode level. The nanocode decompiler may be used to determine flaws, security vulnerabilities, or general quality issues that may exist in the code.
- U.S. Pat. No. 8,739,280 describes a context-sensitive taint analysis system. Taint processing applied to a tainted value of an application is identified and an output context of the application associated with output of the tainted value is determined. It is determined whether the taint processing is effective in mitigating a security vulnerability caused by the tainted value for the output context.
- U.S. Pat. No. 8,347,392 describes an apparatus and method for analyzing and supplementing a program to provide security. A computer readable storage medium has executable instructions to perform an automated analysis of program instructions. The automated analysis includes at least two analyses selected from an automated analysis of injection vulnerabilities, an automated analysis of potential repetitive attacks, an automated analysis of sensitive information, and automated analysis of specific HTTP attributes. Protective instructions are inserted into the program instructions. The protective instructions are utilized to detect and respond to attacks during execution of the program instructions.
- Non-Patent reference, “Dynamic Taint Analysis for Automatic Detection, Analysis” by James Newsome and Dawn Song of Carnegie Mellon University, proposes a dynamic taint analysis solution for automatic detection of overwrite attacks. The approach does not need source code or special compilation for the monitored program, and hence works on commodity software. To demonstrate this idea, they implemented TaintCheck, a mechanism that can perform dynamic taint analysis by performing binary rewriting at run time.
- Non-Patent reference, “gFuzz: An instrumented web application fuzzing environment” by Ezequiel D. Gutesman of Core Security Technologies, Argentina, introduces a fuzzing solution for PHP web applications that improves the detection accuracy and enriches the information provided in vulnerability reports. They use dynamic character-grained taint analysis and grammar-based analysis in order to analyze the anatomy of each executed SQL query and determine which resulted in successful attacks. A vulnerability report is then accompanied by the offending lines of source code and the fuzz vector (with attacker-controlled characters individualized).
- One shortcoming of prior art teachings is that they suffer from poor accuracy, while also at times requiring source code for analysis as opposed to just bytecode/assembly code, or they attempt to simplify the bytecode/assembly code before analysis. Other prior art work teaches running both dynamic and static analysis components in an independent or serial fashion. Furthermore, earlier approaches attempt to exhaustively map all data flows in a decompiled or intermediate representation of a software system, which impairs performance and slows the overall process. Relatedly, prior art teachings do not take advantage of the concurrent multi-core or multi-CPU processing infrastructure that is commonplace these days and that would allow for distributed analysis of very large target software systems with high precision.
- In view of the shortcomings of the prior art, it is an object of the present invention to provide a high-precision software analysis system and methods that do not require the source code of the analyzed program.
- It is another object of the invention to not require exhaustive processing of all dataflows in a program, but rather only of those dataflows that include unsafe data.
- It is another object of the invention to not rely on decompilation of executable binary code.
- It is yet another object of the invention to allow for distributed processing of the analysis framework taught by the invention by taking advantage of a multi-CPU or multi-core processing environment, consequently allowing for analysis of very large target software systems with efficiency and high precision.
- The objects and advantages of the invention are secured by a system and methods of detecting software vulnerabilities in a computer program by analyzing the compiled code of that computer program. The invention optionally uses the source code of the computer program in conjunction with the compiled code, but having the source code is not a requirement of the invention. The invention teaches utilizing an instruction model for each binary instruction of the compiled code. The instruction model for a given instruction includes the location, debug information, instruction type, operands, existing memory state requirements, bytecode metadata, potential security attributes, basic block membership and function/method membership if applicable of that instruction.
- The invention further uses a control flow graph for each instruction that complements the instruction model of that instruction, and includes all potential control flow paths, and a bidirectional list of predecessor instructions of that instruction. Preferably, the compiled code is instrumented at random and critical points in the code. There is a data flow model to record the flow of unsafe data during the execution of the program. The system has the means to analyze the data flow model and create a security finding corresponding to each instruction that calls an unsafe function on unsafe data. These security findings are aggregated in a security report along with the corresponding debug information and the optional source code information for each instruction that triggered the security finding.
- In the preferred embodiment of the invention, the instruction model also includes placeholders for additional attributes. These additional attributes may include information for pointer aliases or unsafe dataflow. The pointer alias information may include an aliasing map containing pointers that have the same address values given a subset of or all possible control flows of the instructions of the compiled code.
- In another embodiment, the instruction model also contains attributes that are deduced from other attributes of the instruction model. These derived attributes may include values for memory locations, processor registers and variable types associated with the given instruction of the instruction model. In another preferred embodiment, the flow of unsafe data is recorded in a data flow file that utilizes a common file format such as XML, based on which the data flow model is at least partially populated. In an advantageous embodiment of the invention, an analyzer module is used to analyze the instruction model, control flow graph and the data flow model to detect software vulnerabilities in the compiled code.
- In a highly advantageous embodiment of the invention, a set of concurrent worker threads are spawned that take advantage of a multi-core or multi-node or multi-machine or multi-CPU processing platform, to analyze instructions where an unknown or unsafe external input (or taint) data is provided to the program and an unsafe function or method is called upon it. In another preferred embodiment of the system, the security findings in the security report also contain a full trace of the unsafe data at the instruction that triggered the security finding, along with the line numbers of the source file if available, a human-readable description of the finding, a risk rating and optionally one or more recommendations to address the security finding.
- The methods of the invention further teach the steps required to carry out the operation of the system. The invention teaches the steps required to detect software vulnerabilities of a computer program by taking as input the compiled code of the program, and optionally its source code. It then creates an instruction model and a control flow graph for each instruction in the compiled code. It further creates a data flow model to record the flow of unsafe data during the execution of the compiled code. The compiled code is instrumented at random and critical control flow points of the program.
- For a given instruction, the instruction model includes the location, debug information, instruction type, operands, existing memory state requirements, bytecode metadata, potential security attributes, basic block membership, function/method membership if applicable and class membership of the given instruction. The instruction model also includes placeholders for additional attributes, including pointer aliasing information, unsafe data flow information and attributes that are deduced from other attributes including values of memory locations, values of processor registers and variable types for the given instruction.
- For each instruction, the control flow graph is populated with all potential control flow paths, and a bidirectional list of predecessor instructions. Finally, for each instruction, the data flow model is populated by running the compiled code with the instrumentation at least once and recording the flow of unsafe data for each run. In another preferred embodiment, this recording of unsafe data flow is first done in a data flow file in a common file format such as XML, and the population of the data flow model is based on the data flow file.
- The compiled code is scanned according to the methods claimed by the invention to find each instruction where an external input is supplied to the program, denoting unknown, unsafe data. If that instruction calls an unsafe function on the unsafe data, this triggers the creation of a security finding. As the analysis is performed, all security findings are aggregated in a security report. In the preferred embodiment, each security finding in the security report includes the debug information for the instruction that triggered the finding, along with the line numbers of the source code if available, a trace of the unsafe data from its origin to termination, identifier values of any processor registers or variables containing the unsafe data, a description of the security finding, a risk rating, and optionally one or more recommendations to address or remedy the security finding. Appropriate highlighting of these elements in the security report is also performed to make the report visually presentable, readable and easy to consume.
- In another advantageous embodiment, three lists are created for each instruction. These lists are Unsafe1, Unsafe2 and Unsafe3. All instructions that are determined to be unsafe, i.e., those that use unsafe data by calling an unsafe function, are added to a list called Worklist. A set of concurrent worker threads are spawned, each thread selecting and processing an instruction at random from Worklist. Based on the control flow graph and data flow model earlier created, for each instruction in Worklist, the Unsafe1 list is populated with incoming unsafe data at that instruction, the Unsafe2 list with unsafe data currently being processed by that instruction, and the Unsafe3 list with unsafe data that has been fully processed by that instruction. As the worker threads process the instructions, the contents of the three lists for each instruction are updated based on the control flow graph of that instruction as data flows from its Unsafe1 list to Unsafe2 list to Unsafe3 list and into the Unsafe1 list of the downstream instruction. If new unsafe data is added to the Unsafe1 list of an instruction that calls an unsafe function, it is re-added to the Worklist and a security finding is generated, and the above process is repeated. Ultimately, the spawning of worker threads is concluded when there are no more unsafe instructions left in Worklist, or a predetermined timeout period has elapsed during the above processing.
- Concurrency locks are provided for each of the three lists, Unsafe1, Unsafe2 and Unsafe3 above, and at each step of the above processing, these locks are used to ensure the integrity of the contents of these lists. When a list is no longer being used, its concurrency lock is released (unlocked).
- In a highly advantageous embodiment, worker threads are distributed across a multi-core or multi-processor or multi-CPU processing environment to improve the performance of the analysis and to allow processing of very large target software programs. In a similarly advantageous embodiment, the traversal of the control flow graph by the worker threads is performed according to custom unsafe data propagation rules provided by the user. In another advantageous embodiment the security findings are created by an analyzer module.
- Clearly, the system and methods of the invention find many advantageous embodiments. The details of the invention, including its preferred embodiments, are presented in the below detailed description with reference to the appended drawing figures.
- FIG. 1 is a block diagram view of the software vulnerabilities detection system according to the current invention.
- FIG. 2 is a conceptual diagram of the instruction model according to the current invention.
- FIG. 3 is a diagram of the control flow graph of an instruction according to the invention.
- FIG. 4 is a conceptual diagram of the data flow model of an instruction according to the invention.
- FIG. 5 is a detailed block diagram view of the elements and their workings according to the current invention.
- FIG. 6 is a flowchart comprising the analytical steps of the algorithm required for the detection of software vulnerabilities according to the current invention.
- The figures and the following description relate to preferred embodiments of the present invention by way of illustration only. It should be noted that from the following discussion, alternative embodiments of the structures and methods disclosed herein will be readily recognized as viable alternatives that may be employed without departing from the principles of the claimed invention.
- Reference will now be made in detail to several embodiments of the present invention(s), examples of which are illustrated in the accompanying figures. It is noted that wherever practicable, similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
- The present invention will be best understood by first reviewing the software vulnerabilities detection system 100 according to the current invention as illustrated in FIG. 1. Vulnerabilities detection system 100 comprises computer program 102 in the form of its compiled code 104 and optionally source code 106 that resulted in its compiled code 104. Computer program 102 is the target program to be analyzed by system 100 for software vulnerabilities. Having source code 106 is desirable but not required by software vulnerabilities detection system 100 according to the invention. Vulnerabilities detected by system 100 in computer program 102 may allow exploitative attacks by potential adversaries or hackers. Such attacks include, but are not limited to, denial of service attacks, code injection attacks and 2nd order attacks such as cross-site scripting (XSS) attacks.
- Software vulnerabilities detection system 100 comprises instruction model 110, control flow graph 112 and data flow model 114. Based on instruction model 110, control flow graph 112 and data flow model 114, software vulnerabilities detection system 100 performs analysis 116 to produce security report 118 comprising the security findings discovered during analysis 116.
- Readers with ordinary skill in the art will understand that compiled code 104 can be executable binary code, machine code, or object code that can run directly on a hardware platform such as x86, Sparc, Mac, HP, IBM Mainframe, etc., or it can be an intermediate bytecode or portable code that can run in a given runtime environment such as the Java Virtual Machine (JVM). Source code 106 can be in any programming language such as C, C++, Java, Assembly, Cobol, SQL, etc. Furthermore, source code 106 can be in any 2nd, 3rd, 4th or higher generation programming language without departing from the principles of the invention. A highly advantageous aspect of the current invention is that source code 106 is desirable, but not required, to achieve the objects of the invention. Not requiring the presence of source code 106 overcomes many practical limitations of the prior art.
- Instruction model 110 is a programming construct used by the invention to model each instruction of compiled code 104. This programming construct comprises all the necessary and desirable attributes required by system 100 to model each instruction of compiled code 104. These attributes include the location (e.g. base address and relative memory location), debug information if available (e.g. variable name annotations and/or source code line annotations), type of the instruction (e.g. mov, add, sub), its operands (e.g. eax register, an integer immediate value, operand stack reference, local value reference), its potential security attributes and existing memory state requirements (e.g. basic block derived invariant conditions), basic block membership (e.g. start and end references for all basic blocks encompassing an instruction), and function/method membership and/or class membership of that instruction if applicable. Those with average skill in the art will find these attributes familiar from the fundamentals of software engineering and computer programming. FIG. 2 provides a conceptual representation of instruction model 110 using a familiar notation for data structures and member associations in computer programming.
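- The attribute set just described maps naturally onto a plain data structure. The following sketch is illustrative only and is not taken from the patent; the class and field names (InstructionModel, pointerAliases, and so on) are assumptions. It shows one way the per-instruction attributes and the placeholders for later-deduced attributes could be grouped:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of a per-instruction model; all names are hypothetical.
public class InstructionModel {
    // Location and identity of the instruction in the compiled code.
    long baseAddress;
    long relativeOffset;
    String mnemonic;                                    // e.g. "mov", "add", "sub"
    List<String> operands = new ArrayList<>();          // e.g. "eax", immediate values, stack refs

    // Optional debug information mapped back to source, when available.
    String sourceFile;                                  // null if no debug info
    int sourceLine = -1;

    // Structural membership.
    String basicBlockId;                                // enclosing basic block (start/end refs)
    String functionOrMethod;                            // enclosing function/method, if applicable
    String className;                                   // enclosing class, if applicable

    // Security-related attributes and memory state requirements.
    List<String> securityAttributes = new ArrayList<>();
    List<String> memoryStateRequirements = new ArrayList<>();

    // Placeholders for attributes filled in later during analysis.
    Map<String, Long> pointerAliases = new HashMap<>();      // alias name -> shared address
    List<String> unsafeDataFlow = new ArrayList<>();         // recorded taint facts
    Map<String, String> deducedAttributes = new HashMap<>(); // e.g. register values, variable types
}
```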
- Referring to FIG. 1, during the execution of compiled code 104, user input 108 may be provided by the operator or user of computer program 102 whose vulnerabilities are to be detected. Those familiar with the art will understand that user input 108 represents a potential security risk for computer program 102 as it may, intentionally or otherwise, violate the bounds of a program variable, which may affect the integrity of computer program 102 or the data it is operating on. Thus user input 108 represents 'taint' or unsafe data, as will be understood by skilled people of the art. User input 108 can be provided in many different ways, for example, via a web form and keyboard, a file, an input/output buffer or stream, a pipe, screen redirect, etc.
- Compiled code 104 according to the invention is preferably instrumented at random and critical control flow points of the program. Those familiar with the art will understand that instrumentation may refer to code instructions and metadata added to the computer program that allow monitoring of its behavior, performance and operation more closely than during normal execution, and may generate additional logging and debug output to the screen or files as desired. As claimed by the invention, computer program 102 is preferably instrumented at random points within the program. Instead of or in addition to that, the program is also preferably instrumented at points where there is a critical control flow transition in the program.
- Those familiar with the art will understand that there are many ways to determine these points where instrumentation will be provided in computer program 102 in the preferred embodiment. For example, instructions in compiled code 104 can be randomly selected for instrumentation. Alternatively or in addition, a pre-processor can be used to determine the critical control flow points in program 102 prior to its execution, and then instrumentation can be added at those points in program 102. Indeed, the invention allows instrumenting all of computer program 102 or none of it, without departing from the principles of the invention. The instrumentation of program 102 allows observing and modifying unsafe data as it flows through program 102 according to the teachings of the invention.
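- One plausible way to choose instrumentation points, consistent with the random-plus-critical strategy described above, is sketched below. The selector, its name and its notion of a "critical" instruction (calls, jumps and returns) are assumptions for illustration, not the patent's method:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical selector for instrumentation points: a random sample of
// instructions plus every instruction that transfers control.
public class InstrumentationPointSelector {
    public static List<Integer> select(List<String> mnemonics, double sampleRate, long seed) {
        Random random = new Random(seed);
        List<Integer> points = new ArrayList<>();
        for (int i = 0; i < mnemonics.size(); i++) {
            String m = mnemonics.get(i);
            // Treat calls, jumps/branches and returns as critical control flow transitions.
            boolean critical = m.startsWith("call") || m.startsWith("j") || m.startsWith("ret");
            boolean randomPick = random.nextDouble() < sampleRate;
            if (critical || randomPick) {
                points.add(i); // index of the instruction to instrument
            }
        }
        return points;
    }
}
```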
- The invention further uses control flow graph 112 for each instruction that complements instruction model 110 of that instruction. Control flow graph 112 for a given instruction of compiled code 104 is populated with all potential control flow paths of that instruction, assuming there is no overwriting of the underlying instructions. Control flow graph 112 for a given instruction also contains a bidirectional list of its predecessor instructions. FIG. 3 represents control flow graph 112 for an instruction I according to the teachings of the invention. In FIG. 3, each instruction is represented by a circle. Instruction I has 4 predecessor instructions P and 3 successor instructions S, representing all possible control flow paths for I as shown in the figure. All P instructions will be contained in a bidirectional list in control flow graph 112 for instruction I, as represented by the dashed line in FIG. 3.
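- Reusing the InstructionModel sketch above, a control-flow-graph node that keeps both predecessor and successor lists, so the graph can be walked in either direction, might look like the following; the names are again hypothetical:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative control-flow-graph node for one instruction.
public class CfgNode {
    final InstructionModel instruction;
    final List<CfgNode> predecessors = new ArrayList<>(); // bidirectional predecessor list
    final List<CfgNode> successors = new ArrayList<>();   // all potential control flow paths

    CfgNode(InstructionModel instruction) {
        this.instruction = instruction;
    }

    // Record one potential control flow edge from this node to 'next'.
    void addEdge(CfgNode next) {
        successors.add(next);
        next.predecessors.add(this);
    }
}
```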
- Referring back to FIG. 1, the invention further comprises data flow model 114. During the execution of program 102, the movement of unsafe data is recorded in data flow model 114. As unsafe data moves from one variable or processor register to another and from one instruction to the successor instruction, this movement is recorded in data flow model 114 according to the teachings of the invention. FIG. 4 represents an example data flow model 114 populated according to the teachings of the invention. Variable V1 contains unsafe data that may have been previously supplied by user input 108 as taught earlier. The tainted data in V1 is then moved to processor register AX in the next instruction of one control flow path, and then copied to variable V2. The subsequent instruction then calls an unsafe function on variable V2, representing a potential security risk in the computer program. FIG. 4 also illustrates additional control flow paths in data flow model 114 where the unsafe function call is performed on the tainted data contained in variable V2. Those familiar with the art will know the various types of unsafe function calls that may result in a potential security flaw in the code that can be exploited by an adversary. For example, in C/C++ the "char *strcpy(char *dest, const char *src)" function called on tainted data is an unsafe function call, because it can allow a security condition called buffer overflow to happen and damage the integrity of computer program 102 of FIG. 1, or its data, or, worse, allow a malicious adversary to inject harmful code or a virus into the computer program.
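- A minimal sketch of such a data flow model, recording only the movement of unsafe data between storage locations, is shown below. The recordMove method and the location strings are assumptions for illustration; the main method simply re-creates the V1-to-AX-to-V2-to-unsafe-call example of FIG. 4:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of a data flow model that records only unsafe (tainted) data movement.
public class DataFlowModel {
    public record TaintMove(String fromLocation, String toLocation, long instructionOffset) {}

    private final List<TaintMove> moves = new ArrayList<>();

    public void recordMove(String from, String to, long offset) {
        moves.add(new TaintMove(from, to, offset));
    }

    public List<TaintMove> moves() {
        return moves;
    }

    // Re-creates the example of FIG. 4: V1 -> AX -> V2, then an unsafe call on V2.
    public static void main(String[] args) {
        DataFlowModel model = new DataFlowModel();
        model.recordMove("user input 108", "V1", 0x10);
        model.recordMove("V1", "AX", 0x14);
        model.recordMove("AX", "V2", 0x18);
        model.recordMove("V2", "strcpy(dest, V2)", 0x1C); // unsafe function called on tainted V2
        model.moves().forEach(System.out::println);
    }
}
```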
- According to the teachings of the current invention as explained above, data flow model 114 only records the flow of unsafe data during the execution of the program, as opposed to attempting to include and record all potential data flows. This significantly reduces the performance overhead and memory requirements of software vulnerabilities detection system 100, allowing it to analyze large target software systems more comprehensively than is possible through the teachings of the prior art. This also allows the current invention to not require decompilation of compiled code, as required by some prior art teachings.
- According to the main embodiment of the invention, based on instruction model 110, control flow graph 112 and data flow model 114, all instructions in computer program 102 that call an unsafe function on unsafe data trigger a security finding, which is recorded in security report 118 as represented in FIG. 1. Each such security finding contains debug information of the instruction that triggered the security finding, along with its source code information, if available. Security report 118 exposes the vulnerabilities in computer program 102 that can be appropriately remediated to prevent exploitative attacks by amateur and professional adversaries according to the teachings of the invention.
- As represented in FIG. 2, instruction model 110 further includes placeholders for additional attributes or deduced attributes that may not be immediately known at the time of the initial creation of instruction model 110. These additional attributes may include pointer aliases. Pointer aliases represent pointers that point to, or contain, memory addresses that remain the same for multiple control flow paths of computer program 102. In addition, instruction model 110 for a given instruction I may include information related to its predecessor instructions P as represented in FIG. 3, and any additional information or metadata as deemed necessary to facilitate recording of the flow of unsafe data as represented in FIG. 4. Furthermore, instruction model 110 may also include information deduced from other attributes. Examples of such derived attributes include memory locations or addresses, processor registers and variable type information for the given instruction based on its type, debug information and bytecode metadata.
- According to an additional embodiment of the invention, analysis 116 in FIG. 1 may be performed by an analyzer module. The analyzer module may be a part of system 100 or may be external to it. If it is external to system 100, appropriate remote invocation calls or function calls or remote procedure calls (RPC) may be implemented to call the external module, as will be obvious to those skilled in the art. Indeed, it is possible that the analyzer module is 3rd party software with its own application programming interface (API), without departing from the principles of the invention. Similarly, in a highly advantageous embodiment, analysis 116 is performed by worker threads that are spawned specifically for that purpose. These worker threads may then be distributed across a cluster of computing nodes, processors or cores, in a multi-CPU or multi-core, parallel processing environment.
- Further embodiments claim that security report 118 of FIG. 1 may comprise an execution trace of unsafe data corresponding to each said security finding populated in the report. The execution trace may contain the origin and termination information for the unsafe data that ultimately caused the security finding to be triggered. For example, if unsafe data was provided as a user input in function or instruction I1 and it traversed through several intervening functions or instructions I2 . . . I9 before being discarded or reset in instruction I10, then the execution trace for the corresponding security finding in security report 118 may contain the entire lifecycle or trace of that data along with the names of functions or instructions I1 . . . I10. In addition, security report 118 may contain a human friendly description of the security finding, and a risk rating or risk factor assigned to the security finding by system 100. Depending on the severity of the vulnerability associated with each finding, vulnerabilities detection system 100 may assign a risk rating from 1 to 10, or as a percentage, or use some other suitable rating system. Security report 118 may also contain one or more recommendations on how to address the security finding, or 'fix' the problem. Such recommendations and risk assignments may be based on a knowledgebase (not shown) derived from subject matter expertise in detecting and correcting such software vulnerabilities.
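- The shape of one security finding, as it might be serialized into such a report, is sketched below; the field names, the 1-to-10 rating scale and the example values are assumptions for illustration only:

```java
import java.util.List;

// Illustrative shape of one security finding as it might appear in the security report.
public record SecurityFinding(
        String description,            // human friendly description of the finding
        int riskRating,                // e.g. 1 (low) to 10 (critical)
        String triggeringInstruction,
        String sourceFile,             // null when no source/debug info is available
        int sourceLine,
        List<String> unsafeDataTrace,  // origin-to-termination trace, e.g. I1 ... I10
        List<String> recommendations) {

    public static SecurityFinding example() {
        return new SecurityFinding(
                "Tainted user input reaches strcpy without validation",
                9,
                "call strcpy",
                "parser.c",
                142,
                List.of("I1: read user input", "I2..I9: propagation", "I10: strcpy sink"),
                List.of("Bound-check the input length before copying"));
    }
}
```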
- The methods of the invention describe the steps required to operate software vulnerabilities detection system 100 of FIG. 1. In the preferred embodiment, computer program 102 is executed at least once and the flow of unsafe data through the program is first recorded in a data flow file 140 as shown in FIG. 4. Based on the contents of data flow file 140, data flow model 114 is populated. The format of data flow file 140 can be any suitable file format, such as XML, plain text, any other markup format, or a binary (or compiled) format, without departing from the principles of the invention.
- In the preferred embodiment, three lists, Unsafe1, Unsafe2 and Unsafe3, are created for each instruction. Persons with average skill in the art will understand that these lists can be linked lists, arrays or any other appropriate data structures of computer software without departing from the principles of the invention. Compiled code 104 is scanned to find each instruction where an external input is supplied to the program, denoting unknown, unsafe or 'taint' data. If that instruction calls an unsafe function on the unsafe data, that instruction is added to another list, Worklist. Persons skilled in the art will again understand that Worklist can be a linked list, an array or any other suitable data structure. Worklist 160, Unsafe1 list 180, Unsafe2 list 182 and Unsafe3 list 184 are shown in FIG. 5 along with the other elements of the invention as taught earlier.
- Next, a set of concurrent worker threads are spawned, each thread selecting and processing an instruction at random from Worklist 160 of FIG. 5. Based on instruction model 110, control flow graph 112 and data flow model 114, for each instruction in Worklist 160, Unsafe1 list 180 is populated with incoming unsafe data at that instruction, Unsafe2 list 182 with unsafe data currently being processed by that instruction, and Unsafe3 list 184 with unsafe data that has been fully processed by that instruction. As the worker threads process the instructions of compiled code 104, the contents of Unsafe1 list 180, Unsafe2 list 182 and Unsafe3 list 184 for each instruction are updated based on control flow graph 112 of that instruction as data flows from its Unsafe1 list 180 to Unsafe2 list 182 to Unsafe3 list 184 and into Unsafe1 list 180 of the successor instruction.
- If new unsafe data is added to Unsafe1 list 180 of an instruction that calls an unsafe function, a new security finding 200 is created and added to security report 118 as represented in FIG. 5, and that instruction is re-added to Worklist 160, and the above process is repeated. Ultimately, the spawning of worker threads is concluded when there are no more unsafe instructions left in Worklist 160, or a predetermined timeout period has elapsed during the above processing. FIG. 6 shows the above algorithm in a flowchart format, where an unsafe instruction denotes an instruction that calls an unsafe function on unsafe data as explained above, and the label instr is used to abbreviate the term instruction.
FIG. 5 , 190, 192, 194 are provided for each ofconcurrency locks Unsafe1 list 180,Unsafe2 list 182 andUnsafe3 list 184 respectively, and at each step of the above processing, these locks are used to ensure the integrity of the contents of these lists. When a list is no longer being used, its concurrency lock is released (unlocked). Those skilled in the art will understand how the contents ofUnsafe1 list 180,Unsafe2 list 182 andUnsafe3 list 184 will be updated as explained above. Further explained, when a worker thread selects an instruction to process fromWorklist 160, it locks itsUnsafe2 list 182 andUnsafe3 list 184, and also temporarily locks itsUnsafe1 list 180 while it imports data from itsUnsafe1 list 180 toUnsafe2 list 182. The worker thread then statically analyzes the currently selected instruction to determine from its incoming unsafe data in Unsafe1 list, currently processed data in Unsafe2 list and fully processed data in Unsafe3 list, what other instructions that unsafe data may propagate to, based on the attributes of the current instruction as contained in itsinstruction model 110, and any other custom unsafe data propagation rules pre-defined or provided by the user. - Examples of custom unsafe data propagation rules include specifying that a function or method, e.g. execSqlStatement(String query), should never receive unsafe or “taint” user input in its first and only parameter. Such a rule could be expressed as an XML file defining regular expressions to identify the specific class and method for this call, along with a numeric value identifying that the first parameter should never be tainted or uncontrolled, along with security information defining the security impact of such a condition. Another example would be a rule which identifies that the subString(Integer from) call will propagate the value of its object instance to its return value, which could be similarly expressed in an xml file, and identifying the return value. Still other examples of custom rules include source rules, which define the insertion of uncontrolled or tainted data into a program and cleanse rules which define methods that are known to control data such that the data can afterwards be considered safe in one or more ways.
- Referring back to FIG. 5 and preceding teachings, based on control flow graph 112 of the current instruction, the current worker thread aggregates all possible control flow destinations of the current instruction in a list Next Instructions (not shown). Subsequently, for each instruction in the Next Instructions list, the current worker thread locks its Unsafe1 list and adds the outgoing processed unsafe data contained in Unsafe3 list 184 of the current instruction to the incoming unsafe data contained in Unsafe1 list 180 of the instruction selected from the Next Instructions list. As explained above, if unsafe data is added to the Unsafe1 list of an instruction that calls an unsafe function, a security finding 200 is added to security report 118 and that instruction is re-added to Worklist 160. The above process continues until there are no more instructions left to process in Worklist 160 or a timeout period has elapsed, as sketched below.
- In a highly advantageous embodiment, worker threads are distributed across a multi-core, multi-CPU, multi-machine or multi-node processing environment to improve the performance of the analysis and to allow processing of very large target software programs. In a similarly advantageous embodiment, the traversal of the control flow graph by the worker threads is performed according to custom unsafe data propagation rules provided by the user. In another advantageous embodiment, the security findings are created by an analyzer module.
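A hedged Java sketch of this per-instruction worker step follows, combining the locking discipline described earlier (locks on the Unsafe2 and Unsafe3 lists, with a temporary lock on Unsafe1 during import) with the update of each successor's Unsafe1 list. The queue types, field names and the simple pass-through analysis are assumptions made for the sketch; rule evaluation and error handling are omitted.

```java
// Hypothetical sketch of a worker thread processing one instruction under the per-list
// concurrency locks (corresponding to locks 190, 192, 194) and then pushing processed
// unsafe data into each control-flow successor's incoming list. Illustrative only.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Queue;
import java.util.Set;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.locks.ReentrantLock;

final class WorkerSketch {

    static final class Node {
        final String label;
        final boolean callsUnsafeFunction;
        final List<Node> successors = new ArrayList<>();      // possible control flow destinations
        final Set<String> unsafe1 = new HashSet<>();
        final Set<String> unsafe2 = new HashSet<>();
        final Set<String> unsafe3 = new HashSet<>();
        final ReentrantLock lock1 = new ReentrantLock();       // guards Unsafe1
        final ReentrantLock lock2 = new ReentrantLock();       // guards Unsafe2
        final ReentrantLock lock3 = new ReentrantLock();       // guards Unsafe3

        Node(String label, boolean callsUnsafeFunction) {
            this.label = label;
            this.callsUnsafeFunction = callsUnsafeFunction;
        }
    }

    static void processOne(Node instr, BlockingQueue<Node> worklist, Queue<String> report) {
        instr.lock2.lock();
        instr.lock3.lock();
        try {
            instr.lock1.lock();                                // held only while importing
            try {
                instr.unsafe2.addAll(instr.unsafe1);           // Unsafe1 -> Unsafe2
            } finally {
                instr.lock1.unlock();
            }

            instr.unsafe3.addAll(instr.unsafe2);               // stand-in for rule-driven analysis

            for (Node next : instr.successors) {               // the "Next Instructions" list
                next.lock1.lock();                             // lock the successor's Unsafe1
                try {
                    boolean changed = next.unsafe1.addAll(instr.unsafe3);
                    if (changed && next.callsUnsafeFunction) {
                        report.add("security finding at " + next.label);
                        worklist.add(next);                    // re-add for further processing
                    }
                } finally {
                    next.lock1.unlock();
                }
            }
        } finally {
            instr.lock3.unlock();
            instr.lock2.unlock();
        }
    }
}
```

Distributing such workers across cores, CPUs or machines, as the embodiments above contemplate, would mainly change how the worklist and the per-instruction lists are shared; the per-list locking idea remains the same.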
- In another advantageous embodiment, security report 118 as shown in FIG. 5 contains a full execution trace of unsafe data corresponding to each security finding 200 populated in security report 118. The execution trace may contain the origin and termination information for the unsafe data that ultimately caused security finding 200 to be triggered. For example, if unsafe data was provided as a user input in function or instruction I1 and it traversed through several intervening functions or instructions I2 . . . I9 before being discarded or reset in instruction I10, then the execution trace for the corresponding security finding 200 in security report 118 may contain the entire lifecycle or trace of that data, along with the names or labels of instructions I1 . . . I10 and the filename or filenames and corresponding line numbers in the source files obtained from debug information, assembly instructions or source code 106 if available. If source code 106 is available, each source file corresponding to the above trace is parsed into an abstract syntax tree or trees, and the line numbers and offsets for non-keyword identifier tokens are generated. Persons skilled in the art will understand that these non-keyword identifier tokens represent user or custom variables, as opposed to keywords belonging to the grammar of the programming language itself. Using the abstract syntax tree or trees above, for each instruction in the trace, the identifier names and values of any variables or processor registers that contained the unsafe data are obtained using the debug information and added to the trace information; a brief sketch of this enrichment step follows.
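The enrichment step might look roughly like the following sketch, in which a simple regular-expression token scan stands in for the abstract syntax tree pass described above; the record layout, the keyword set and the method names are all assumptions made for this example.

```java
// Illustrative sketch of attaching non-keyword identifier names to one execution-trace
// entry once the corresponding source file is available. The token scan below is a
// stand-in for a real abstract-syntax-tree pass; names and structure are assumptions.
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

final class TraceEnrichmentSketch {

    record TraceEntry(String instructionLabel, String sourceFile, int lineNumber,
                      List<String> identifiers) {}

    // A small set of language keywords to exclude; a real implementation would take
    // these from the grammar of the language being analyzed.
    private static final Set<String> KEYWORDS = Set.of(
            "if", "else", "for", "while", "return", "new", "class",
            "public", "private", "static", "final", "void", "int");

    private static final Pattern IDENTIFIER = Pattern.compile("[A-Za-z_][A-Za-z0-9_]*");

    // Collects the non-keyword identifier tokens appearing on one source line.
    static List<String> identifiersOn(String sourceLine) {
        List<String> ids = new ArrayList<>();
        Matcher m = IDENTIFIER.matcher(sourceLine);
        while (m.find()) {
            if (!KEYWORDS.contains(m.group())) {
                ids.add(m.group());
            }
        }
        return ids;
    }

    // Builds one enriched trace entry from an instruction label, a file/line position
    // (taken from debug information) and the lines of that source file.
    static TraceEntry enrich(String label, String file, int line, List<String> fileLines) {
        String text = (line >= 1 && line <= fileLines.size()) ? fileLines.get(line - 1) : "";
        return new TraceEntry(label, file, line, identifiersOn(text));
    }
}
```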
- In addition, security report 118 of FIG. 5 may be formatted to be visually appealing, with highlighting of the important pieces of information for each security finding 200, and may contain a human-friendly description of the finding along with a risk rating or risk factor assigned to the finding by system 100. Depending on the severity of the vulnerability associated with each security finding 200, vulnerabilities detection system 100 may assign a risk rating from 1 to 10, or as a percentage, or use some other suitable rating system. Security report 118 may also contain one or more recommendations on how to address security finding 200, or 'fix' the problem. Such recommendations and risk assignments may be based on a knowledgebase (not shown) derived from subject matter expertise in detecting and correcting such software vulnerabilities. The knowledgebase may be further designed to continuously augment its content, either automatically, with human assistance, or by a combination of both, as vulnerabilities detection system 100 operates over time.
- In view of the above teaching, a person skilled in the art will recognize that the apparatus and method of the invention can be embodied in many different ways in addition to those described without departing from the principles of the invention. Therefore, the scope of the invention should be judged in view of the appended claims and their legal equivalents.
Claims (30)
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/460,636 US9454659B1 (en) | 2014-08-15 | 2014-08-15 | Software vulnerabilities detection system and methods |
| US15/014,309 US9824214B2 (en) | 2014-08-15 | 2016-02-03 | High performance software vulnerabilities detection system and methods |
| US15/057,294 US10599852B2 (en) | 2014-08-15 | 2016-03-01 | High performance software vulnerabilities detection system and methods |
| US15/251,232 US9715593B2 (en) | 2014-08-15 | 2016-08-30 | Software vulnerabilities detection system and methods |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/460,636 US9454659B1 (en) | 2014-08-15 | 2014-08-15 | Software vulnerabilities detection system and methods |
Related Child Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/014,309 Continuation-In-Part US9824214B2 (en) | 2014-08-15 | 2016-02-03 | High performance software vulnerabilities detection system and methods |
| US15/251,232 Continuation US9715593B2 (en) | 2014-08-15 | 2016-08-30 | Software vulnerabilities detection system and methods |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US9454659B1 US9454659B1 (en) | 2016-09-27 |
| US20160300063A1 true US20160300063A1 (en) | 2016-10-13 |
Family
ID=56939632
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/460,636 Active 2035-06-12 US9454659B1 (en) | 2014-08-15 | 2014-08-15 | Software vulnerabilities detection system and methods |
| US15/251,232 Active US9715593B2 (en) | 2014-08-15 | 2016-08-30 | Software vulnerabilities detection system and methods |
Family Applications After (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US15/251,232 Active US9715593B2 (en) | 2014-08-15 | 2016-08-30 | Software vulnerabilities detection system and methods |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US9454659B1 (en) |
Cited By (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9749349B1 (en) * | 2016-09-23 | 2017-08-29 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US20170344349A1 (en) * | 2016-05-25 | 2017-11-30 | Microsoft Technolgy Licensing, Llc. | Sample driven profile guided optimization with precise correlation |
| US20180046454A1 (en) * | 2014-12-17 | 2018-02-15 | Cisco Technology, Inc. | Securing secret information in source code verification and at runtime |
| WO2018101575A1 (en) * | 2016-11-29 | 2018-06-07 | 한국전력공사 | Binary code-based embedded software vulnerability analysis device and method therefor |
| WO2018127794A1 (en) * | 2017-01-04 | 2018-07-12 | Checkmarx Ltd. | Management of security vulnerabilities |
| US20190391905A1 (en) * | 2016-07-27 | 2019-12-26 | Undo Ltd. | Debugging systems |
| CN111222141A (en) * | 2019-12-31 | 2020-06-02 | 广东为辰信息科技有限公司 | A method and system for analyzing code vulnerability of automotive electronic control unit |
| CN111274134A (en) * | 2020-01-17 | 2020-06-12 | 扬州大学 | Vulnerability identification and prediction method, system, computer equipment and storage medium based on graph neural network |
| US11036866B2 (en) | 2018-10-18 | 2021-06-15 | Denso Corporation | Systems and methods for optimizing control flow graphs for functional safety using fault tree analysis |
| CN114117426A (en) * | 2021-11-16 | 2022-03-01 | 中国人民解放军国防科技大学 | WEB application vulnerability detection method and system |
| US11522901B2 (en) | 2016-09-23 | 2022-12-06 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US20240028742A1 (en) * | 2022-07-22 | 2024-01-25 | Cisco Technology, Inc. | Learned control flow monitoring and enforcement of unobserved transitions |
| US12499231B2 (en) | 2022-12-19 | 2025-12-16 | Cisco Technology, Inc. | Inline control flow monitor with enforcement |
Families Citing this family (43)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7051322B2 (en) * | 2002-12-06 | 2006-05-23 | @Stake, Inc. | Software analysis framework |
| DK3130123T3 (en) * | 2014-04-11 | 2021-10-04 | Hdiv Security S L | Detection of manipulation of applications |
| US10073975B2 (en) * | 2016-08-11 | 2018-09-11 | International Business Machines Corporation | Application integrity verification in multi-tier architectures |
| US10176074B2 (en) * | 2016-09-21 | 2019-01-08 | Lenvio Inc. | Computed call/jump target resolution via behavior computation |
| CN106933645A (en) * | 2017-01-17 | 2017-07-07 | 深圳市能信安科技股份有限公司 | A kind of Apk security risks automatic Static auditing system and method |
| KR101904911B1 (en) | 2017-10-13 | 2018-10-08 | 한국인터넷진흥원 | Method for Automatically Detecting Security Vulnerability Based on Hybrid Fuzzing, and Apparatus thereof |
| US10706156B2 (en) | 2017-10-13 | 2020-07-07 | 1230604 BC Ltd. | Security risk identification in a secure software lifecycle |
| CN107886000B (en) * | 2017-11-13 | 2019-11-22 | 华中科技大学 | A software vulnerability detection method, a hierarchical response method, and a software vulnerability detection system |
| US10678916B2 (en) * | 2018-03-20 | 2020-06-09 | Didi Research America, Llc | Malicious program detection |
| CN108595952A (en) * | 2018-03-30 | 2018-09-28 | 全球能源互联网研究院有限公司 | A kind of detection method and system of electric power mobile application software loophole |
| US11087001B2 (en) * | 2018-04-04 | 2021-08-10 | Red Hat, Inc. | Determining location of speculation denial instructions for memory access vulnerabilities |
| CN110363004B (en) * | 2018-04-10 | 2023-01-03 | 腾讯科技(深圳)有限公司 | Code vulnerability detection method, device, medium and equipment |
| US11429725B1 (en) * | 2018-04-26 | 2022-08-30 | Citicorp Credit Services, Inc. (Usa) | Automated security risk assessment systems and methods |
| CN110647457B (en) * | 2018-06-26 | 2023-03-28 | 阿里巴巴集团控股有限公司 | Data mining method, data processing method and system |
| US11093605B2 (en) * | 2018-06-28 | 2021-08-17 | Cisco Technology, Inc. | Monitoring real-time processor instruction stream execution |
| US20210081541A1 (en) * | 2018-08-20 | 2021-03-18 | Hewlett-Packard Development Company, L.P. | Vulnerability state report |
| KR101981028B1 (en) | 2018-09-28 | 2019-05-23 | 한국인터넷진흥원 | System for detecting security vulnerability based on binary, method and program thereof |
| CN111444509B (en) * | 2018-12-27 | 2024-05-14 | 北京奇虎科技有限公司 | CPU vulnerability detection method and system based on virtual machine |
| US11463443B2 (en) | 2019-09-19 | 2022-10-04 | Bank Of America Corporation | Real-time management of access controls |
| KR102110735B1 (en) | 2019-10-30 | 2020-06-08 | 한국인터넷진흥원 | Method and system for re-generating binary for vulnerability detection |
| CN111475820B (en) * | 2020-04-28 | 2023-08-01 | 张皓天 | Binary vulnerability detection method, system and storage medium based on executable program |
| US11379346B2 (en) | 2020-05-12 | 2022-07-05 | Lightrun Platform LTD | Systems and methods for debugging and application development |
| CN111767547B (en) * | 2020-06-24 | 2022-08-19 | 北京理工大学 | Software vulnerability detection method based on complex network community |
| CN111966718B (en) * | 2020-09-09 | 2024-03-15 | 支付宝(杭州)信息技术有限公司 | System and method for data dissemination tracking of application systems |
| US11610000B2 (en) | 2020-10-07 | 2023-03-21 | Bank Of America Corporation | System and method for identifying unpermitted data in source code |
| US11106801B1 (en) * | 2020-11-13 | 2021-08-31 | Accenture Global Solutions Limited | Utilizing orchestration and augmented vulnerability triage for software security testing |
| CN112597038B (en) * | 2020-12-28 | 2023-12-08 | 中国航天系统科学与工程研究院 | Software defect prediction method and system |
| US11687440B2 (en) * | 2021-02-02 | 2023-06-27 | Thales Dis Cpl Usa, Inc. | Method and device of protecting a first software application to generate a protected software application |
| US11783068B2 (en) | 2021-03-24 | 2023-10-10 | Bank Of America Corporation | System for dynamic exposure monitoring |
| CN113656280B (en) * | 2021-07-09 | 2024-04-05 | 中国科学院信息工程研究所 | Vulnerability exploitation point searching method and device based on symbol execution |
| CN113672933B (en) * | 2021-08-06 | 2023-06-20 | 中国科学院软件研究所 | A Hongmeng security vulnerability detection method and system |
| CN113761539B (en) * | 2021-08-06 | 2023-10-17 | 中国科学院软件研究所 | A Hongmeng security vulnerability defense method and system |
| CN113836023B (en) * | 2021-09-26 | 2023-06-27 | 南京大学 | Compiler security testing method based on architecture cross check |
| WO2023101574A1 (en) * | 2021-12-03 | 2023-06-08 | Limited Liability Company Solar Security | Method and system for static analysis of binary executable code |
| CN114912110B (en) * | 2022-03-21 | 2024-08-06 | 中国科学院信息工程研究所 | A Node.js code security detection method and system |
| US12086269B2 (en) | 2022-03-21 | 2024-09-10 | Bank Of America Corporation | System for deployable software vulnerability testing platform |
| CN115905023A (en) * | 2022-12-31 | 2023-04-04 | 成都易迪森科技有限公司 | Integrated test platform, test method, test terminal, storage medium and device |
| CN116032654B (en) * | 2023-02-13 | 2023-06-30 | 山东省计算中心(国家超级计算济南中心) | Firmware vulnerability detection and data security management method and system |
| CN116088863B (en) * | 2023-04-04 | 2023-09-26 | 阿里云计算有限公司 | Fault positioning method and system |
| US12190128B1 (en) | 2023-10-31 | 2025-01-07 | Affirm Logic Corporation | Methods and systems for identifying control flow patterns in software code to detect software anomalies |
| US12223061B1 (en) | 2024-02-28 | 2025-02-11 | Affirm Logic Corporation | Methods and systems for analyzing dataflow associated with software code to detect software anomalies |
| US12259805B1 (en) * | 2024-03-29 | 2025-03-25 | Affirm Logic Corporation | Methods and systems for identifying control flow patterns and dataflow constraints in software code to detect software anomalies |
| CN120105413B (en) * | 2025-05-08 | 2025-11-21 | 北京时代新威信息技术有限公司 | Security vulnerability test report generation method, system and electronic equipment |
Family Cites Families (108)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5210837A (en) | 1990-06-15 | 1993-05-11 | Digital Equipment Corporation | Methods and apparatus for transforming machine language program control into high-level language constructs by manipulating graphical program representations |
| US5790858A (en) | 1994-06-30 | 1998-08-04 | Microsoft Corporation | Method and system for selecting instrumentation points in a computer program |
| US5586328A (en) | 1994-10-21 | 1996-12-17 | Microsoft Corporation | Module dependency based incremental compiler and method |
| US5671419A (en) | 1995-06-15 | 1997-09-23 | International Business Machines Corporation | Interprocedural data-flow analysis that supports recursion while only performing one flow-sensitive analysis of each procedure |
| US5787287A (en) | 1995-12-27 | 1998-07-28 | Intel Corporation | Representation of control flow and data dependence for machine |
| US6226789B1 (en) | 1996-01-29 | 2001-05-01 | Compaq Computer Corporation | Method and apparatus for data flow analysis |
| US5854924A (en) | 1996-08-08 | 1998-12-29 | Globetrotter Software, Inc. | Static debugging tool and method |
| US5872949A (en) | 1996-11-13 | 1999-02-16 | International Business Machines Corp. | Apparatus and method for managing data flow dependencies arising from out-of-order execution, by an execution unit, of an instruction series input from an instruction source |
| EP0918281A1 (en) | 1997-03-29 | 1999-05-26 | IMEC vzw | Method and apparatus for size optimisation of storage units |
| WO1999030229A1 (en) | 1997-12-11 | 1999-06-17 | Digits Corp. | Object code analysis and remediation system and method |
| US6389587B1 (en) | 1999-02-04 | 2002-05-14 | Sun Microsystems, Inc. | User interface for developing and executing data flow programs and methods, apparatus, and articles of manufacture for optimizing the execution of data flow programs |
| US7430670B1 (en) | 1999-07-29 | 2008-09-30 | Intertrust Technologies Corp. | Software self-defense systems and methods |
| US6892303B2 (en) | 2000-01-06 | 2005-05-10 | International Business Machines Corporation | Method and system for caching virus-free file certificates |
| US6883101B1 (en) | 2000-02-08 | 2005-04-19 | Harris Corporation | System and method for assessing the security posture of a network using goal oriented fuzzy logic decision rules |
| US6829710B1 (en) | 2000-03-14 | 2004-12-07 | Microsoft Corporation | Technique for producing, through watermarking, highly tamper-resistant executable code and resulting “watermarked” code so formed |
| EP1290547B1 (en) | 2000-05-09 | 2004-01-07 | Sun Microsystems, Inc. | Transformation of objects between a computer programming language and a data representation language |
| US7577834B1 (en) | 2000-05-09 | 2009-08-18 | Sun Microsystems, Inc. | Message authentication using message gates in a distributed computing environment |
| US6981279B1 (en) | 2000-08-17 | 2005-12-27 | International Business Machines Corporation | Method and apparatus for replicating and analyzing worm programs |
| US7284274B1 (en) | 2001-01-18 | 2007-10-16 | Cigital, Inc. | System and method for identifying and eliminating vulnerabilities in computer software applications |
| US7076804B2 (en) | 2001-05-11 | 2006-07-11 | International Business Machines Corporation | Automated program resource identification and association |
| US6546493B1 (en) | 2001-11-30 | 2003-04-08 | Networks Associates Technology, Inc. | System, method and computer program product for risk assessment scanning based on detected anomalous events |
| CA2372034A1 (en) | 2002-02-14 | 2003-08-14 | Cloakware Corporation | Foiling buffer-overflow and alien-code attacks by encoding |
| US6941467B2 (en) | 2002-03-08 | 2005-09-06 | Ciphertrust, Inc. | Systems and methods for adaptive message interrogation through multiple queues |
| US7930753B2 (en) | 2002-07-01 | 2011-04-19 | First Data Corporation | Methods and systems for performing security risk assessments of internet merchant entities |
| GB0218680D0 (en) | 2002-08-10 | 2002-09-18 | Ibm | Method and system for modifying a class file to monitor data flow |
| US7051322B2 (en) | 2002-12-06 | 2006-05-23 | @Stake, Inc. | Software analysis framework |
| KR100503386B1 (en) | 2003-03-14 | 2005-07-26 | 주식회사 안철수연구소 | Method to detect malicious code patterns with due regard to control and data flow |
| EP1627303A4 (en) | 2003-04-18 | 2009-01-14 | Ounce Labs Inc | Method ans system for detecting vulnerabilities in source code |
| WO2004097661A1 (en) | 2003-05-01 | 2004-11-11 | Samsung Electronics Co., Ltd. | Authenticating method and apparatus |
| US7383583B2 (en) | 2004-03-05 | 2008-06-03 | Microsoft Corporation | Static and run-time anti-disassembly and anti-debugging |
| US8584239B2 (en) | 2004-04-01 | 2013-11-12 | Fireeye, Inc. | Virtual machine with dynamic data flow analysis |
| US8171553B2 (en) | 2004-04-01 | 2012-05-01 | Fireeye, Inc. | Heuristic based capture with replay to virtual machine |
| US7207065B2 (en) | 2004-06-04 | 2007-04-17 | Fortify Software, Inc. | Apparatus and method for developing secure software |
| US7975306B2 (en) | 2004-06-04 | 2011-07-05 | Hewlett-Packard Development Company, L.P. | Apparatus and method for monitoring secure software |
| US20050273860A1 (en) | 2004-06-04 | 2005-12-08 | Brian Chess | Apparatus and method for developing, testing and monitoring secure software |
| US20050273859A1 (en) | 2004-06-04 | 2005-12-08 | Brian Chess | Apparatus and method for testing secure software |
| US20060090206A1 (en) | 2004-10-15 | 2006-04-27 | Ladner Michael V | Method, system and apparatus for assessing vulnerability in Web services |
| US8887287B2 (en) | 2004-10-27 | 2014-11-11 | Alcatel Lucent | Method and apparatus for software integrity protection using timed executable agents |
| US7840845B2 (en) | 2005-02-18 | 2010-11-23 | Intel Corporation | Method and system for setting a breakpoint |
| US7860842B2 (en) | 2005-03-16 | 2010-12-28 | Oracle International Corporation | Mechanism to detect and analyze SQL injection threats |
| US7735136B2 (en) | 2005-04-18 | 2010-06-08 | Vmware, Inc. | 0-touch and 1-touch techniques for improving the availability of computer programs under protection without compromising security |
| US8266700B2 (en) | 2005-05-16 | 2012-09-11 | Hewlett-Packard Development Company, L. P. | Secure web application development environment |
| US20070016894A1 (en) | 2005-07-15 | 2007-01-18 | Sreedhar Vugranam C | System and method for static analysis using fault paths |
| US8239939B2 (en) | 2005-07-15 | 2012-08-07 | Microsoft Corporation | Browser protection module |
| US8347392B2 (en) | 2005-08-25 | 2013-01-01 | Hewlett-Packard Development Company, L.P. | Apparatus and method for analyzing and supplementing a program to provide security |
| US7849509B2 (en) | 2005-10-07 | 2010-12-07 | Microsoft Corporation | Detection of security vulnerabilities in computer programs |
| US8528093B1 (en) | 2006-04-12 | 2013-09-03 | Hewlett-Packard Development Company, L.P. | Apparatus and method for performing dynamic security testing using static analysis data |
| US8510827B1 (en) | 2006-05-18 | 2013-08-13 | Vmware, Inc. | Taint tracking mechanism for computer security |
| EP1870829B1 (en) | 2006-06-23 | 2014-12-03 | Microsoft Corporation | Securing software by enforcing data flow integrity |
| US7971193B2 (en) | 2006-07-14 | 2011-06-28 | Hewlett-Packard Development Company, L.P. | Methods for performining cross module context-sensitive security analysis |
| US7526681B2 (en) | 2006-08-07 | 2009-04-28 | Sap Portals Israel Ltd. | Software testing framework |
| US7788235B1 (en) | 2006-09-29 | 2010-08-31 | Symantec Corporation | Extrusion detection using taint analysis |
| US20100083240A1 (en) | 2006-10-19 | 2010-04-01 | Checkmarx Ltd | Locating security vulnerabilities in source code |
| US8266702B2 (en) | 2006-10-31 | 2012-09-11 | Microsoft Corporation | Analyzing access control configurations |
| US8380841B2 (en) | 2006-12-07 | 2013-02-19 | Microsoft Corporation | Strategies for investigating and mitigating vulnerabilities caused by the acquisition of credentials |
| US7877812B2 (en) | 2007-01-04 | 2011-01-25 | International Business Machines Corporation | Method, system and computer program product for enforcing privacy policies |
| US8499353B2 (en) | 2007-02-16 | 2013-07-30 | Veracode, Inc. | Assessment and analysis of software security flaws |
| US9069967B2 (en) | 2007-02-16 | 2015-06-30 | Veracode, Inc. | Assessment and analysis of software security flaws |
| US8613080B2 (en) | 2007-02-16 | 2013-12-17 | Veracode, Inc. | Assessment and analysis of software security flaws in virtual machines |
| US8752032B2 (en) * | 2007-02-23 | 2014-06-10 | Irdeto Canada Corporation | System and method of interlocking to protect software-mediated program and device behaviours |
| CN100461132C (en) | 2007-03-02 | 2009-02-11 | 北京邮电大学 | Software security code analyzer and detection method based on source code static analysis |
| US7933946B2 (en) | 2007-06-22 | 2011-04-26 | Microsoft Corporation | Detecting data propagation in a distributed system |
| US8381192B1 (en) * | 2007-08-03 | 2013-02-19 | Google Inc. | Software testing using taint analysis and execution path alteration |
| JP5176478B2 (en) | 2007-10-22 | 2013-04-03 | 富士通株式会社 | Data flow analysis device, data flow analysis method, and data flow analysis program |
| US7530107B1 (en) | 2007-12-19 | 2009-05-05 | International Business Machines Corporation | Systems, methods and computer program products for string analysis with security labels for vulnerability detection |
| US8321840B2 (en) | 2007-12-27 | 2012-11-27 | Intel Corporation | Software flow tracking using multiple threads |
| US8327339B2 (en) | 2008-06-30 | 2012-12-04 | Oracle America, Inc. | Method and system for fast static taint analysis |
| JP5459313B2 (en) | 2009-05-20 | 2014-04-02 | 日本電気株式会社 | Dynamic data flow tracking method, dynamic data flow tracking program, dynamic data flow tracking device |
| US8423965B2 (en) | 2009-06-23 | 2013-04-16 | Microsoft Corporation | Tracing of data flow |
| US8397300B2 (en) * | 2009-09-22 | 2013-03-12 | International Business Machines Corporation | Detecting security vulnerabilities relating to cryptographically-sensitive information carriers when testing computer software |
| US8584246B2 (en) | 2009-10-13 | 2013-11-12 | International Business Machines Corporation | Eliminating false reports of security vulnerabilities when testing computer software |
| US8407800B2 (en) | 2009-11-24 | 2013-03-26 | Honeywell International Inc. | Method for software vulnerability flow analysis, generation of vulnerability-covering code, and multi-generation of functionally-equivalent code |
| US8468605B2 (en) | 2009-11-30 | 2013-06-18 | International Business Machines Corporation | Identifying security vulnerability in computer software |
| US8402547B2 (en) | 2010-03-14 | 2013-03-19 | Virtual Forge GmbH | Apparatus and method for detecting, prioritizing and fixing security defects and compliance violations in SAP® ABAP™ code |
| US8458798B2 (en) | 2010-03-19 | 2013-06-04 | Aspect Security Inc. | Detection of vulnerabilities in computer systems |
| US20110231317A1 (en) | 2010-03-19 | 2011-09-22 | Wihem Arsac | Security sensitive data flow analysis |
| US8528095B2 (en) | 2010-06-28 | 2013-09-03 | International Business Machines Corporation | Injection context based static analysis of computer software applications |
| US8381242B2 (en) | 2010-07-20 | 2013-02-19 | International Business Machines Corporation | Static analysis for verification of software program access to secure resources for computer systems |
| US8701198B2 (en) | 2010-08-10 | 2014-04-15 | Salesforce.Com, Inc. | Performing security analysis on a software application |
| WO2012025865A1 (en) | 2010-08-24 | 2012-03-01 | Checkmarx Ltd. | Mining source code for violations of programming rules |
| US8434070B2 (en) | 2010-10-26 | 2013-04-30 | International Business Machines Corporation | Generating specifications of client-server applications for static analysis |
| US8667584B2 (en) | 2010-12-15 | 2014-03-04 | International Business Machines Corporation | Formal analysis of the quality and conformance of information flow downgraders |
| US8856764B2 (en) | 2011-01-25 | 2014-10-07 | International Business Machines Corporation | Distributed static analysis of computer software applications |
| US8627279B2 (en) * | 2011-02-07 | 2014-01-07 | International Business Machines Corporation | Distributed, non-intrusive code review in a development environment |
| US8850405B2 (en) | 2011-02-23 | 2014-09-30 | International Business Machines Corporation | Generating sound and minimal security reports based on static analysis of a program |
| US8627465B2 (en) | 2011-04-18 | 2014-01-07 | International Business Machines Corporation | Automatic inference of whitelist-based validation as part of static analysis for security |
| US8539466B2 (en) | 2011-05-23 | 2013-09-17 | International Business Machines Corporation | Determining suitable insertion points for string sanitizers in a computer code |
| US8516443B2 (en) | 2011-05-26 | 2013-08-20 | Oracle International Corporation | Context-sensitive analysis framework using value flows |
| US8949992B2 (en) | 2011-05-31 | 2015-02-03 | International Business Machines Corporation | Detecting persistent vulnerabilities in web applications |
| US9032528B2 (en) | 2011-06-28 | 2015-05-12 | International Business Machines Corporation | Black-box testing of web applications with client-side code evaluation |
| US8893102B2 (en) | 2011-07-27 | 2014-11-18 | Oracle International Corporation | Method and system for performing backward-driven path-sensitive dataflow analysis |
| US8793665B2 (en) | 2011-08-26 | 2014-07-29 | Fujitsu Limited | Performing taint analysis for javascript software using a control flow graph |
| US8671397B2 (en) | 2011-09-27 | 2014-03-11 | International Business Machines Corporation | Selective data flow analysis of bounded regions of computer software applications |
| US8739280B2 (en) | 2011-09-29 | 2014-05-27 | Hewlett-Packard Development Company, L.P. | Context-sensitive taint analysis |
| US8756587B2 (en) | 2011-09-30 | 2014-06-17 | International Business Machines Corporation | Static analysis of computer software applications |
| US9971896B2 (en) | 2011-12-30 | 2018-05-15 | International Business Machines Corporation | Targeted security testing |
| US8806464B2 (en) | 2012-04-26 | 2014-08-12 | Hewlett-Packard Development Company, L.P. | Process flow optimized directed graph traversal |
| US8844046B2 (en) | 2012-09-26 | 2014-09-23 | International Business Machines Corporation | Method and apparatus for paralleling and distributing static source code security analysis using loose synchronization |
| US9740868B2 (en) | 2012-09-27 | 2017-08-22 | International Business Machines Corporation | Customizing a security report using static analysis |
| US8973131B2 (en) | 2012-11-02 | 2015-03-03 | International Business Machines Corporation | Refinement-based security analysis |
| US20140130153A1 (en) | 2012-11-08 | 2014-05-08 | International Business Machines Corporation | Sound and effective data-flow analysis in the presence of aliasing |
| US9690945B2 (en) | 2012-11-14 | 2017-06-27 | International Business Machines Corporation | Security analysis using relational abstraction of data structures |
| US9171169B2 (en) | 2012-12-14 | 2015-10-27 | Salesforce.Com, Inc. | System and method for dynamic analysis wrapper objects for application dataflow |
| US8869287B2 (en) | 2012-12-31 | 2014-10-21 | International Business Machines Corporation | Hybrid analysis of vulnerable information flows |
| US9569334B2 (en) | 2013-03-14 | 2017-02-14 | Whitehat Security, Inc. | Techniques for traversing representations of source code |
| US9405915B2 (en) | 2013-03-14 | 2016-08-02 | Whitehat Security, Inc. | Techniques for correlating vulnerabilities across an evolving codebase |
| US9158922B2 (en) * | 2013-05-29 | 2015-10-13 | Lucent Sky Corporation | Method, system, and computer-readable medium for automatically mitigating vulnerabilities in source code |
| TWI528216B (en) * | 2014-04-30 | 2016-04-01 | 財團法人資訊工業策進會 | Method, electronic device, and user interface for on-demand detecting malware |
- 2014-08-15 (US): application US14/460,636, patent US9454659B1 (en), status Active
- 2016-08-30 (US): application US15/251,232, patent US9715593B2 (en), status Active
Cited By (23)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180046454A1 (en) * | 2014-12-17 | 2018-02-15 | Cisco Technology, Inc. | Securing secret information in source code verification and at runtime |
| US20170344349A1 (en) * | 2016-05-25 | 2017-11-30 | Microsoft Technolgy Licensing, Llc. | Sample driven profile guided optimization with precise correlation |
| US11003428B2 (en) * | 2016-05-25 | 2021-05-11 | Microsoft Technolgy Licensing, Llc. | Sample driven profile guided optimization with precise correlation |
| US10761966B2 (en) * | 2016-07-27 | 2020-09-01 | Undo Ltd. | Generating program analysis data for analysing the operation of a computer program |
| US20190391905A1 (en) * | 2016-07-27 | 2019-12-26 | Undo Ltd. | Debugging systems |
| US11522901B2 (en) | 2016-09-23 | 2022-12-06 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US11165811B2 (en) | 2016-09-23 | 2021-11-02 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US9749349B1 (en) * | 2016-09-23 | 2017-08-29 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US10116683B2 (en) | 2016-09-23 | 2018-10-30 | OPSWAT, Inc. | Computer security vulnerability assessment |
| US10554681B2 (en) | 2016-09-23 | 2020-02-04 | OPSWAT, Inc. | Computer security vulnerability assessment |
| KR101906004B1 (en) * | 2016-11-29 | 2018-10-10 | 한국전력공사 | Apparatus and method for analyzing embeded software vulnerability based on binary code |
| WO2018101575A1 (en) * | 2016-11-29 | 2018-06-07 | 한국전력공사 | Binary code-based embedded software vulnerability analysis device and method therefor |
| EP3566166A4 (en) * | 2017-01-04 | 2020-09-09 | Checkmarx Ltd. | Vulnerability management |
| WO2018127794A1 (en) * | 2017-01-04 | 2018-07-12 | Checkmarx Ltd. | Management of security vulnerabilities |
| US11170113B2 (en) * | 2017-01-04 | 2021-11-09 | Checkmarx Ltd. | Management of security vulnerabilities |
| US11036866B2 (en) | 2018-10-18 | 2021-06-15 | Denso Corporation | Systems and methods for optimizing control flow graphs for functional safety using fault tree analysis |
| CN111222141A (en) * | 2019-12-31 | 2020-06-02 | 广东为辰信息科技有限公司 | A method and system for analyzing code vulnerability of automotive electronic control unit |
| CN111274134A (en) * | 2020-01-17 | 2020-06-12 | 扬州大学 | Vulnerability identification and prediction method, system, computer equipment and storage medium based on graph neural network |
| CN114117426A (en) * | 2021-11-16 | 2022-03-01 | 中国人民解放军国防科技大学 | WEB application vulnerability detection method and system |
| US20240028742A1 (en) * | 2022-07-22 | 2024-01-25 | Cisco Technology, Inc. | Learned control flow monitoring and enforcement of unobserved transitions |
| US12475224B2 (en) | 2022-07-22 | 2025-11-18 | Cisco Technology, Inc. | Control flow integrity enforcement for applications running on platforms |
| US12488106B2 (en) | 2022-07-22 | 2025-12-02 | Cisco Technology, Inc. | Control flow integrity monitoring for applications running on platforms |
| US12499231B2 (en) | 2022-12-19 | 2025-12-16 | Cisco Technology, Inc. | Inline control flow monitor with enforcement |
Also Published As
| Publication number | Publication date |
|---|---|
| US9715593B2 (en) | 2017-07-25 |
| US20160371494A1 (en) | 2016-12-22 |
| US9454659B1 (en) | 2016-09-27 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9715593B2 (en) | Software vulnerabilities detection system and methods | |
| US9824214B2 (en) | High performance software vulnerabilities detection system and methods | |
| US10599852B2 (en) | High performance software vulnerabilities detection system and methods | |
| Liao et al. | SmartDagger: a bytecode-based static analysis approach for detecting cross-contract vulnerability | |
| US7849509B2 (en) | Detection of security vulnerabilities in computer programs | |
| US8930884B2 (en) | Efficient extraction of software dependencies from program code | |
| Alhuzali et al. | Chainsaw: Chained automated workflow-based exploit generation | |
| WO2011043856A1 (en) | System and method for static detection and categorization of information-flow downgraders | |
| Ernst et al. | Boolean formulas for the static identification of injection attacks in Java | |
| WO2023101574A1 (en) | Method and system for static analysis of binary executable code | |
| Muralee et al. | {ARGUS}: A Framework for Staged Static Taint Analysis of {GitHub} Workflows and Actions | |
| Brumley et al. | Theory and techniques for automatic generation of vulnerability-based signatures | |
| Ali et al. | Unbundle-Rewrite-Rebundle: Runtime Detection and Rewriting of Privacy-Harming Code in JavaScript Bundles | |
| Wang et al. | Offdtan: A new approach of offline dynamic taint analysis for binaries | |
| Pérez et al. | Lapse+ static analysis security software: Vulnerabilities detection in java ee applications | |
| Lathar et al. | Stacy-static code analysis for enhanced vulnerability detection | |
| Di Stasio | Evaluation of static security analysis tools on open source distributed applications | |
| Chahar et al. | Code analysis for software and system security using open source tools | |
| Mostafa et al. | Netdroid: Summarizing network behavior of android apps for network code maintenance | |
| US10002253B2 (en) | Execution of test inputs with applications in computer security assessment | |
| Quinlan et al. | Source code and binary analysis of software defects | |
| Li et al. | DepTaint: a static taint analysis method based on program dependence | |
| Goichon et al. | Static vulnerability detection in Java service-oriented components | |
| Bhardwaj et al. | Fuzz testing in stack-based buffer overflow | |
| US12050687B1 (en) | Systems and methods for malware detection in portable executable files |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: SECURISEA, INC., GEORGIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DAYMONT, JOSHUA M.;REEL/FRAME:033980/0949 Effective date: 20140920 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| FEPP | Fee payment procedure |
Free format text: SURCHARGE FOR LATE PAYMENT, SMALL ENTITY (ORIGINAL EVENT CODE: M2554); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 8 |