US20140237295A1 - System and method for automating testing of computers - Google Patents
- Publication number
- US20140237295A1 (application US14/261,788, filed 2014-04-25)
- Authority
- US
- United States
- Prior art keywords
- file
- under test
- cmd
- application under test
- Prior art date
- 2011-04-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3668—Testing of software
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Abstract
An application under test may be run in a test mode that receives a series of test scenarios and produces a set of test results under the control of a verification application. The verification application utilizes “typed-data” (i.e., data having known types that are associated with the data itself, e.g., XML-based data) such that a number of parameters can be set for each event and a number of result parameters can be checked for each result in at least one script. A series of scripts can be combined into an action file that may invoke scripts and override parameters within the invoked scripts. The events can be sent and received using a number of messaging protocols and communications adapters.
Description
- The present invention is directed to a system and method for automating the testing of computer systems, and in one embodiment to a system and method for utilizing “typed-data” (e.g., XML-based data) to automate the testing of application use cases against an application under test.
- As the complexity of software increases, and as the number of programmers and/or developers increases, software projects begin to require more control over the development cycle. One development in multi-programmer software is the use of a source code control system or a version control system (SCCS/VCS), including a number of open source SCCS/VCS systems. SCCS/VCS systems enable computer programming files (e.g., source code control files, configuration files) to be put through a development cycle which typically involves checking out one or more programming files, assembling the checked-out programming files into one or more programs (e.g., by compiling and linking the programming files), and checking the programming files back in along with documentation on what changes and/or additions were made. Testing the resulting one or more programs can also be added to the development cycle, with the testing occurring before or after the check-in process. SCCS/VCS systems can further be integrated into continuous integration (CI) tools, such as Hudson and Jenkins.
- The testing of the resulting one or more programs can be a major part of the time required for a development cycle. Often quality assurance (QA) testers run the programs through a series of tests to see if the programs perform as expected. However, manually testing each scenario of a test suite can be both error prone and time consuming. Furthermore, the recording and display of manual tests can also be both error prone and time consuming.
- The following description, given with respect to the attached drawings, may be better understood with reference to the non-limiting examples of the drawings, wherein:
- FIG. 1 is a block diagram of a testing system in which a verification application can perform a series of tests on an application under test and receive a series of results from the application under test to determine if the application under test is operating as expected;
- FIG. 2 is an exemplary script for controlling how the verification application performs a series of steps on the application under test;
- FIG. 3 is an exemplary table format (e.g., as used in a spreadsheet) that is processed and whose values are placed into a template (such as a template of FIG. 4) to build an action file to control how the verification application performs a series of steps on the application under test;
- FIG. 4 is an exemplary template for creating an action file to control how the verification application performs a series of steps on the application under test by assigning values from a row of a test data table (such as in FIG. 3);
- FIG. 5 is a block diagram of a schema (containing two table definitions) for storing script generation information;
- FIG. 6 is a block diagram of a first table having the structure of a portion of the schema of FIG. 5 for storing action file generation information;
- FIG. 7 is a block diagram of a second table having the structure of a portion of the schema of FIG. 5 for storing action file generation information; and
- FIG. 8 is a flowchart showing an automated testing procedure integrated with or interacting with a source code control system.
- Turning to FIG. 1, an application under test (AUT) 105 has been produced (e.g., compiled/linked or written for an interpretive environment), and it may be run in a test mode that receives a series of test scenarios and produces a set of test results. The testing involves the creation of and response to a number of testing events. In a preferred embodiment, the testing events are based on "typed-data" (i.e., data having known types that are associated with the data itself, e.g., XML-based data) such that a number of parameters can be set for each event and a number of result parameters can be checked for each result. The events can be sent and received using a number of messaging protocols (e.g., TCP/IP, 29 West and/or Financial Information Exchange).
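- By way of illustration only (this sketch is not part of the original specification), a "typed-data" event can be modeled as a structure that stores each parameter together with its declared type, so that parameters can be set on outgoing events and checked on results; the class and method names below are hypothetical:

    from dataclasses import dataclass, field

    # Hypothetical model of a "typed-data" event: each value is kept together
    # with its declared type, mirroring how XML-based data carries type
    # information along with the data itself.
    @dataclass
    class TypedField:
        name: str
        type_name: str  # e.g., "string", "int", "price"
        value: object

    @dataclass
    class TypedEvent:
        event_type: str  # e.g., "Txn"
        fields: dict = field(default_factory=dict)

        def set_param(self, name, type_name, value):
            self.fields[name] = TypedField(name, type_name, value)

        def check_param(self, name, expected):
            # Verify one result parameter against an expected value.
            f = self.fields.get(name)
            return f is not None and f.value == expected

    # Set a parameter on an outgoing event, then check a result parameter.
    evt = TypedEvent("Txn")
    evt.set_param("Price", "int", 205)
    assert evt.check_param("Price", 205)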
- In an exemplary system, a trading-based financial testing application acting as a verification application 100 can be built with testing procedures for verifying that a trading system (acting as an application under test (AUT) 105) is working properly. For example, as shown in FIG. 1, the verification application 100 is designed to communicate with the AUT 105. In one embodiment, the verification application 100 is configured with information which allows it to identify the AUT 105 for communication. Such configuration may be in the form of a configuration file including information about what machine/server the AUT 105 is running on and/or what machine/server to connect to in the event of a failure of the main machine/server. The configuration file may also optionally include information on what communications port to connect to on the specified machine/server. An exemplary configuration file is shown below:

    <CONFIGURATION>
      <MAIN_OUTPUT_FILE>COUT</MAIN_OUTPUT_FILE>
      <RESULT_OUTPUT_FILE>./out.xml</RESULT_OUTPUT_FILE>
      <EXITONEXCEPTION>False</EXITONEXCEPTION>
      <CONFIGURATION>
        <TYPE>CORE</TYPE>
        <NAME>SESSION</NAME>
        <DIRECTORY>./lib</DIRECTORY>
        <LIBNAME>libServer.so</LIBNAME>
        <SESSIONNAME>ITXSession</SESSIONNAME>
        <NODENAME>miaxftldev01</NODENAME>
        <CONFIGURATION>
          <TYPE>SESSION</TYPE>
          <NAME>miaxftldev01</NAME>
          <HOSTNAME>miaxftldev01</HOSTNAME>
          <PORT>1291</PORT>
        </CONFIGURATION>
      </CONFIGURATION>
    </CONFIGURATION>
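- A minimal sketch (illustrative only, not from the specification) of how a verification application might read such a configuration file to locate the machine/server and port of the AUT; the helper name is hypothetical and the element names follow the example above:

    import xml.etree.ElementTree as ET

    # Hypothetical helper: scan the nested CONFIGURATION elements for the one
    # of TYPE "SESSION" and return the host and port to connect to.
    def find_session_endpoint(config_xml):
        root = ET.fromstring(config_xml)
        for conf in root.iter("CONFIGURATION"):
            if conf.findtext("TYPE") == "SESSION":
                return conf.findtext("HOSTNAME"), int(conf.findtext("PORT"))
        return None

    config = """<CONFIGURATION>
      <CONFIGURATION>
        <TYPE>SESSION</TYPE>
        <HOSTNAME>miaxftldev01</HOSTNAME>
        <PORT>1291</PORT>
      </CONFIGURATION>
    </CONFIGURATION>"""
    print(find_session_endpoint(config))  # ('miaxftldev01', 1291)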
- After the verification application 100 has connected to the AUT 105, the verification application 100 can request from the AUT 105 a set of configuration information about the AUT 105. Such information can be used by the verification application 100 to determine what kinds of tests can be performed on the AUT 105 and whether requested tests are compatible with the AUT 105. For example, the AUT 105 can provide to the verification application 100 a list of commands supported by the AUT 105 and/or the calling conventions/parameters for those lists of commands. By receiving that information at run-time, the verification application 100 can determine if a command in a script file should in fact be sent to the AUT 105.
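- That run-time check might be sketched as follows (illustrative only; the data shapes and function name are assumptions, not the specification's API): commands named in a script are compared against the command list and calling parameters reported by the AUT, and unsupported commands are skipped rather than sent:

    # Hypothetical sketch: aut_capabilities maps each supported command name
    # to the set of parameter names the AUT reported for it at run time.
    def filter_supported(script_commands, aut_capabilities):
        runnable = []
        for cmd in script_commands:
            params = aut_capabilities.get(cmd["name"])
            if params is None:
                print("skipping %s: not supported by this AUT" % cmd["name"])
            elif not set(cmd["args"]) <= params:
                print("skipping %s: unknown parameters %s"
                      % (cmd["name"], set(cmd["args"]) - params))
            else:
                runnable.append(cmd)
        return runnable

    caps = {"SendRequest": {"cmd.Side", "cmd.Qty", "cmd.Price"}}
    cmds = [{"name": "SendRequest", "args": {"cmd.Side": "Bid", "cmd.Qty": "200"}},
            {"name": "CancelAll", "args": {}}]
    print(filter_supported(cmds, caps))  # only the SendRequest command survives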
- FIG. 2 illustrates an exemplary script (110 in FIG. 1) that can be read by the verification application 100. The script is formatted as a set of XML-based data that includes typed-data elements that are identified by matched pairs of tags. For example, the line <Script name="Create_Quote" step='Step 0'> is matched with its ending tag "</Script>", and everything within the matched tags is considered part of the script. Alternatively, as shown with respect to the "SendRequest", when all the elements of a data item are contained within a single block, the end data item tag can be implicitly defined by including a '/' character just before an ending '>' for the tag.
- As can further be seen, the various data items can include step numbers or identifiers to enable various portions of the script to be performed in a specified order. The step number assigned to a step is included as part of the tag information.
- The exemplary script of FIG. 2 is parsed by the verification application 100 and performed in the order specified in the script. The illustrated portion of the script is given a human-readable name (name="Create_Quote") with two main steps: sending a request (using a SendRequest command) and waiting for a series of events (using a WaitForEvent command having three VerifyEvent sub-commands).
- When assembling the SendRequest command, the verification application 100 parses each of the various fields of the SendRequest command by reading the corresponding key/value pairs of the form "key=value". The verification application 100 then builds an internal data structure to hold the information specified in the key/value pairs, and transmits the data structure to the AUT 105. In the example of FIG. 2, the "Create_Quote" script is configured (if not overridden) to create a command (named "cmd") that is a named data structure that performs a "Create" Action for a Bid (cmd.Side='Bid') for 200 shares (cmd.Qty='200') of a first financial product (cmd.ProductID='110625995') at a specified price (cmd.Price='205') using a specified type of order (cmd.OrdType='Limit') on behalf of a particular user (cmd.UserID='10'). The data structure may also include a Message ID (cmd.MID='QATest1') and an indication (cmd.Cache='Quote') of what location (or cache) the command is to be handled by.
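- A hedged sketch of that key/value parsing step (the function and structure names are hypothetical, not the specification's internals): attributes of the form cmd.Side='Bid' are folded into a named internal data structure keyed by the prefix before the dot, ready to be transmitted to the AUT:

    # Hypothetical sketch: fold "key=value" attributes such as cmd.Side='Bid'
    # into a nested structure named by the prefix before the dot ("cmd" here).
    def build_structures(attributes):
        structures = {}
        for key, value in attributes.items():
            if "." in key:                        # e.g., "cmd.Side"
                struct_name, field_name = key.split(".", 1)
                structures.setdefault(struct_name, {})[field_name] = value
        return structures

    attrs = {"cmd.Cache": "Quote", "cmd.Action": "Create", "cmd.Side": "Bid",
             "cmd.Qty": "200", "cmd.Price": "205", "cmd.UserID": "10",
             "step": "Send Request"}
    print(build_structures(attrs))
    # {'cmd': {'Cache': 'Quote', 'Action': 'Create', 'Side': 'Bid', ...}}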
- The script then waits for a resulting event (using the WaitForEvent structure) from the specified command location (cache='Quote'), which is the same command location that was specified in the creation of the bid. Upon receipt of the specified event or events, the verification application 100 fills out an internal "event" data structure that can be read, and a number of conditions are verified as specified by "VerifyEvent" structures. For example, the verification application 100 can verify that the specified price in the event data structure for the transaction matches the price specified in the cmd data structure. Similarly, the verification application 100 can verify that the specified quantity and productID of the transaction match the quantity and productID, respectively, specified in the cmd data structure. For each of the verifications, the verification application 100 can print information depending on whether the verification was successful or unsuccessful. When successful, the verification application 100 prints the state information specified in the "onsuccessprint" key/value pair. When unsuccessful, the verification application 100 prints the state information specified in the "onfailprint" key/value pair. The verification application 100 can further combine the results of a hierarchy of events such that a quality assurance tester can quickly see which event failed and then "drill down" to the cause by looking at greater and greater detail of the event. For example, the printed test results can show that Step 1.0 failed, and then the printed information can show which of the sub-steps caused the failure (e.g., step 2 failed while steps 1 and 3 were successful). If Step 1.0 was a sub-step in a series of other events, then the results of Step 1.0 would themselves be part of the results for the step(s) that called them. In order to better enable this hierarchical review of results, a display technique such as a TreeView component or a tree-based Dynamic HTML (DHTML) document can be used such that error-free portions of the results are shown in a collapsed view and at least a portion of the results with errors are shown in an expanded view.
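- The hierarchical "drill down" display could work along these lines (a sketch under assumed data shapes, not the specification's implementation): error-free subtrees collapse to one line while failing subtrees expand to show the offending sub-step:

    # Hypothetical sketch: collapse passing subtrees, expand failing ones.
    def print_results(step, indent=0):
        pad = "  " * indent
        if step["passed"]:
            print(pad + "[+] %s: PASS (collapsed)" % step["name"])
        else:
            print(pad + "[-] %s: FAIL" % step["name"])
            for child in step.get("children", []):
                print_results(child, indent + 1)  # drill down into the cause

    results = {"name": "Step 1.0", "passed": False, "children": [
        {"name": "step 1", "passed": True},
        {"name": "step 2", "passed": False},
        {"name": "step 3", "passed": True}]}
    print_results(results)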
- The verification application 100 can utilize script re-use by processing "action files" which incorporate one or more script files and optionally redefine values specified within the referenced script(s). As shown below, an action script (e.g., named Action_test.xml) includes at least one "ExecuteScriptAction" command that specifies the script to execute and its parameters:
    <VerifyScript>
      <UseCase ID='Test - Use Case 1' Name='Market Maker sends in a Quote'
        Description='A Market Maker is able to send in a quote' />
      <TestCase Component='Matching Engine' Sub-Component='Quote Behavior'
        Flow='Quotes' ID='1' Name='A Market Maker is able to send in a Quote'
        Description='Verify that a Market Maker is able to send in a quote and
          that it is successfully created in the system'
        Author='InventorName' Create_Date='04/04/2011' Version='1.0' />
      <ExecuteScriptAction description='Create book' name='Create_Quote'
        cmd.MID='QATest1' cmd.UserID='2' cmd.Side='Bid' cmd.Qty='10'
        cmd.Price='200' />
    </VerifyScript>
- In the "ExecuteScriptAction" command, the verification application 100 is commanded to execute the script of FIG. 2, but the cmd.MID, cmd.UserID, cmd.Side, cmd.Qty and cmd.Price variables are overridden with the values passed within the ExecuteScriptAction command. For example, the cmd.UserID value of '10' in the script is overridden with the value of '2' instead, and the cmd.Qty and cmd.Price values are set to '10' and '200', respectively, instead of to '100' and '205', respectively, as defined in the script.
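- The override rule can be summarized with a short, hedged sketch (the function name is hypothetical): values supplied by the ExecuteScriptAction command replace the script's defaults, while any default not mentioned in the action file is kept:

    # Hypothetical sketch of the ExecuteScriptAction override rule.
    def apply_overrides(script_defaults, action_params):
        merged = dict(script_defaults)  # start from the script's own values
        merged.update(action_params)    # action-file values win on conflict
        return merged

    defaults = {"cmd.UserID": "10", "cmd.Qty": "100", "cmd.Price": "205",
                "cmd.Side": "Bid"}
    overrides = {"cmd.UserID": "2", "cmd.Qty": "10", "cmd.Price": "200"}
    print(apply_overrides(defaults, overrides))
    # {'cmd.UserID': '2', 'cmd.Qty': '10', 'cmd.Price': '200', 'cmd.Side': 'Bid'}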
- Even with the use of action files and scripts, the manual creation of enough test cases to test the whole system can be time consuming and tedious, especially where a series of similar events all need to be created to test an AUT 105. Instead of directly reading an action file and/or a script and processing the action file/script as described above, a series of test data can be used to create an action file and/or scripts from one or more templates. The verification application 100 can read a data table (such as shown in FIG. 3) to automatically create an action file. FIG. 3 shows a partial data table (with other data columns not being shown for the sake of clarity) that contains a row per command to be built and added to an action file. Generally, a series of test cases are provided in a table (with the first row being the data types if they are not already known from the order of the table). Each non-header row of the table is parsed, and a corresponding script template is determined for each row by determining the script name in the "name" column. The verification application 100 then reads in the script template identified in the row, and the variables in the templates are assigned their values from the table on a row-by-row basis. For example, the TC_Quote_Edits_1 test case instructs the verification application 100 to read in the "Quote_Edits" template (shown in FIG. 4) and assign the cmd.TxnType variable to have a value of "Txn" (as was discussed above with respect to FIG. 2). Similarly, the cmd.MID variable is assigned the value of "Create". All the remaining variables are assigned the values in the corresponding row if a value for that variable is specified. If no value is specified, then the resulting script has the default value specified in the template itself.
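- That row-by-row generation step might be sketched as follows (illustrative only; the CSV layout and emitted attribute syntax are assumptions patterned on FIGS. 3 and 4): each row names a template and supplies the parameters that override that template's defaults:

    import csv, io

    # Hypothetical sketch: emit one ExecuteScriptAction per table row, using
    # the row's "name" column as the template and its non-empty cells as
    # overriding parameters.
    def rows_to_action_file(table_csv):
        lines = ["<VerifyScript>"]
        for row in csv.DictReader(io.StringIO(table_csv)):
            template = row.pop("name")
            params = " ".join("%s='%s'" % (k, v) for k, v in row.items() if v)
            lines.append("  <ExecuteScriptAction name='%s'%s />"
                         % (template, " " + params if params else ""))
        lines.append("</VerifyScript>")
        return "\n".join(lines)

    table = ("name,cmd.TxnType,cmd.MID,cmd.ProductID\n"
             "Reset_Server,,,\n"
             "Quote_Edits,Txn,Create,110625995")
    print(rows_to_action_file(table))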
- Using the data table (e.g., as shown in
FIG. 3 ) and script templates (e.g., as shown inFIG. 4 ) exemplary tests that can be performed on the AUT 105 include, but are not limited to: verifying that an error condition is signaled when a Non-existing ProductID is entered; verifying that an error condition is signaled when a non-numeric ProductID is entered; verifying that an error condition is signaled when a negative numeric ProductID is entered; verifying that an error condition is signaled when a null ProductID is sent; verifying that an error condition is signaled when a Non-existing UserID is entered; verifying that an error condition is signaled when a customer (non-market maker) UserID is entered to create a quote; verifying that an error condition is signaled when a non-numeric UserID is entered; verifying that an error condition is signaled when a negative numeric UserID is entered; verifying that an error condition is signaled when a null UserID is sent; verifying that an error condition is signaled when order type is a value other than market or limit; verifying that an error condition is signaled when order type is Market and has a Price; verifying that an error condition is signaled when order type is Limit and has no Price; verifying that an error condition is signaled when order type is Market and has zero quantity; verifying that an error condition is signaled when order type is Limit and has no quantity; verifying that an error condition is signaled when order type is Limit and has zero quantity; verifying that an error condition is signaled when a null order type is sent; verifying that an error condition is signaled when side is a value other than Bid or Offer; verifying that an error condition is signaled when a null side is sent; verifying that an error condition is signaled when price is zero; verifying that an error condition is signaled when price is negative; verifying that an error condition is signaled when price is non-numeric; verifying that an error condition is signaled when price has a decimal; verifying that an error condition is signaled when a null price is sent; verifying that an error condition is signaled when quantity is zero; verifying that an error condition is signaled when quantity is negative; verifying that an error condition is signaled when quantity is non-numeric; verifying that an error condition is signaled when quantity has a comma; verifying that an error condition is signaled when quantity is not a whole number; verifying that an error condition is signaled when quantity is greater than the value set in cmd.Qty; verifying that an error condition is signaled when a null quantity is sent; verifying that an error condition is signaled when time in force has a value other than Day; verifying that an error condition is signaled when a null time in force is sent; verifying that an error condition is signaled when an “unsigned long” value of (long+1) for Price is sent; verifying that an error condition is signaled when an unsigned long value of (long+1) for ProductID is sent; verifying that an error condition is signaled when an unsigned long value of (long+1) for quantity is sent; verifying that an error condition is signaled when an unsigned long value of (long+1) for UserID is sent; verifying that an error condition is signaled when a side of Bid is sent in lowercase; verifying that an error condition is signaled when a side of Offer is sent in lowercase; verifying that an error condition is signaled when an order type of Market is sent in lowercase; verifying that an error 
condition is signaled when an order type of Limit is sent in lowercase; and verifying that an error condition is signaled when a time in force of Day is sent in lowercase. - As also shown in
- As also shown in FIG. 3, not all rows need to process the same script template. The first row of the table indicates that the "Reset_Server" template is to be added to the resulting action file before the Quote_Edits template is processed with any of the other rows. This enables the AUT 105 to be placed into a known state before any testing begins.
- As discussed above, commands can add to the "cache" of command and/or event information by performing certain commands. The cache that is to be operated on is specified in the field cmd.Cache of the cmd data structure created by the verification application 100. As shown in FIG. 3, the Quote cache is one of the possible caches that can be used by the verification application 100. However, additional caches can likewise be used. As shown in the partial script below, a financial AUT 105 may further include one or more caches for storing order information and may utilize multi-level tests (e.g., using parent/child relationships):
    <VerifyScript>
      <Script name="Limit_Buy_Order" step="Step 0">
        <SendRequest cmd.TxnType="Txn" cmd.MID="CreateOrderTest1"
          cmd.Cache="Order" cmd.Action="Create" cmd.Owner="3016"
          cmd.Account="QAACCT" cmd.UserID="21" cmd.ClOrdID="QA1"
          cmd.OrdType="Limit" cmd.OrderQty="700" cmd.Price="200"
          cmd.TimeInForce="Day" cmd.ProductID="110625995"
          cmd.ExDestination="NA" cmd.Side="Buy" cmd.ExecInst="None"
          cmd.MaxFloor="0" cmd.CustomerOrFirm="PriorityCustomer"
          step="Send Request" />
        <!-- New Parent Order Created -->
        <WaitForEvent cache="Order" timeout="100" step="Step 1.0">
          <VerifyEvent validate="Account==cmd.Account and UserID==cmd.UserID
              and ClOrdID==cmd.ClOrdID and OrdType==cmd.OrdType and
              Price==cmd.Price and OrderQty==cmd.OrderQty and
              TimeInForce==cmd.TimeInForce and ProductID==cmd.ProductID and
              Side==cmd.Side and State=='New' and CumQty==0 and NumFills==0
              and NumChildren==0 and ParentID=='NA'"
            onfailprint="*****Event Data InValid(Create)*****"
            onsuccessprint="*****Event Data Valid(Create)*****"
            printdata="true" step="1.0" />
        </WaitForEvent>
        <!-- New Child Order Created -->
        <WaitForEvent cache="Order" timeout="100" step="Step 2.0">
          <VerifyEvent validate="Account==cmd.Account and UserID==cmd.UserID
              and ClOrdID like 'NA' and OrdType==cmd.OrdType and
              Price==cmd.Price and OrderQty==cmd.OrderQty and
              TimeInForce==cmd.TimeInForce and ProductID==cmd.ProductID and
              Side==cmd.Side and State=='New' and CumQty==0 and NumFills==0
              and NumChildren==0 and ParentID!='NA'"
            onfailprint="*****Event Data InValid(Create)*****"
            onsuccessprint="*****Event Data Valid(Create)*****"
            printdata="true" step="2.0" />
        </WaitForEvent>
        <!-- Parent Order Updated with Child Order -->
        <WaitForEvent cache="Order" timeout="100" step="Step 3.0">
          <VerifyEvent validate="Account==cmd.Account and UserID==cmd.UserID
              and ClOrdID==cmd.ClOrdID and OrdType==cmd.OrdType and
              Price==cmd.Price and OrderQty==cmd.OrderQty and
              TimeInForce==cmd.TimeInForce and ProductID==cmd.ProductID and
              Side==cmd.Side and State=='New' and CumQty==0 and NumFills==0
              and NumChildren==1 and ParentID=='NA'"
            onfailprint="*****Event Data InValid(Create)*****"
            onsuccessprint="*****Event Data Valid(Create)*****"
            printdata="true" step="3.0" />
        </WaitForEvent>
        <!-- Response -->
        <WaitForResponse step="Step 4.0">
          <!-- <VerifyResponse validate="Action == 'OK' and MID == 'CreateOrderTest1'" -->
          <VerifyResponse validate="Action == 'OK'"
            onfailprint="*****Response InValid(Create)*****"
            onsuccessprint="*****Response Valid(Create)*****"
            step="4.0" />
        </WaitForResponse>
        <!-- Quote Created -->
        <WaitForEvent cache="Quote" timeout="100" ttl="1000" step="Step 5.0">
          <VerifyEvent validate="ProductID==cmd.ProductID and Price==cmd.Price
              and Qty==cmd.OrderQty and TIF==cmd.TimeInForce and State=='Open'"
            onfailprint="*****Event Data InValid(Create)*****"
            onsuccessprint="*****Event Data Valid(Create)*****"
            printdata="true" step="5.0" />
        </WaitForEvent>
      </Script>
    </VerifyScript>
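- The VerifyEvent checks used in scripts such as the one above can be approximated with a tiny evaluator (a sketch only; the real condition grammar is defined by the specification, and this version handles only equality clauses joined by "and"): cmd.* references are resolved against the command that was sent, then each clause is compared against the event pulled from the named cache:

    # Hypothetical sketch: evaluate equality clauses such as
    # "Price==cmd.Price and State=='New'" against a received event.
    def verify_event(event, cmd, validate):
        for clause in validate.split(" and "):
            left, op, right = clause.partition("==")
            left, right = left.strip(), right.strip().strip("'")
            if right.startswith("cmd."):          # reference to the sent command
                right = cmd[right[len("cmd."):]]
            if op != "==" or str(event.get(left)) != str(right):
                return False
        return True

    cmd = {"Price": "200", "OrderQty": "700"}
    event = {"Price": "200", "OrderQty": "700", "State": "New"}
    ok = verify_event(event, cmd,
                      "Price==cmd.Price and OrderQty==cmd.OrderQty and State=='New'")
    print("Event Data Valid(Create)" if ok else "Event Data InValid(Create)")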
- While the above has been described with reference to adding information to caches, the verification application 100 may further request that information in the caches be updated on behalf of the verification application 100. As shown in the partial script or action file below, the conditions of an existing order in the "Order" cache are replaced using a "Replace" action:
    <VerifyScript>
      <Script name="Replace_Order4" step="0">
        <SendRequest cmd.TxnType="Txn" cmd.MID="ReplaceOrderTest1"
          cmd.Cache="Order" cmd.Action="Replace" cmd.UserID="21"
          cmd.OrigClOrdID="QA1" cmd.ClOrdID="Replace_QA1"
          cmd.ProductID="110625995" cmd.OrderQty="800" cmd.Side="Buy"
          cmd.TimeInForce="Day" cmd.OrdType="Limit" step="Send Request" />
        <!-- Response -->
        <WaitForResponse step="Step 1.0">
          <VerifyResponse validate="TxnType=='Exception' and MID=='ReplaceOrderTest1'"
            onfailprint="*****Response InValid(Create)*****"
            onsuccessprint="*****Response Valid(Create)*****"
            step="1.0" />
        </WaitForResponse>
      </Script>
    </VerifyScript>
- As described above, the verification application 100 may request from the AUT 105 a schema that describes the types of commands that can be sent to the AUT 105 in order to allow the verification application 100 to test a portion of a script before actually sending the resulting command to the AUT 105. Exemplary schema for the Order cache and the Quote cache are shown below in the attached appendix which forms a portion of this specification.
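- A hedged sketch of that pre-send validation (the schema shape here, required fields plus allowed values, is an assumption for illustration and is not the appendix's actual schema format): each command is checked against the schema obtained from the AUT before it is transmitted:

    # Hypothetical sketch: validate a command against a schema of valid
    # commands before sending it to the AUT.
    def validate_against_schema(cmd, schema):
        errors = []
        for field_name in schema["required"]:
            if field_name not in cmd:
                errors.append("missing required field %s" % field_name)
        for field_name, allowed in schema.get("allowed_values", {}).items():
            if field_name in cmd and cmd[field_name] not in allowed:
                errors.append("%s=%r not in %s"
                              % (field_name, cmd[field_name], allowed))
        return errors

    quote_schema = {"required": ["Action", "Side", "Qty", "Price"],
                    "allowed_values": {"Side": ["Bid", "Offer"]}}
    bad_cmd = {"Action": "Create", "Side": "bid", "Qty": "200"}
    print(validate_against_schema(bad_cmd, quote_schema))
    # ['missing required field Price', "Side='bid' not in ['Bid', 'Offer']"]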
- In addition to the verification application 100 being able to convert data tables and script templates to a series of scripts and/or action files, it is alternatively possible to utilize a stand-alone tool to generate scripts and/or action files from the data tables and script templates such that the script and/or action file generation functions can be separated from the script processing functions of the verification application 100. Furthermore, the above discussion of data tables should be understood by those of ordinary skill in the art to include any mechanism for reading "rows" of data that are to be processed to create the scripts from the templates. The creation of such data tables can be from spreadsheets, delimited files (e.g., using commas or tabs as delimiters), databases, or XML files. Preferably the creation of the data tables allows the fields corresponding to each column to be specified dynamically at run-time (e.g., using header rows), but fixed formats can be utilized as well (e.g., when reading from a fixed structure record of a database). Alternatively, each row of data to be applied to a template can be created from a series of records (e.g., of a database) by "joining" them with a common identifier (e.g., a row number). For example, as shown in FIG. 5, a ScriptName table can be created that stores rows of test names, script names and the order that the scripts are supposed to be run in as part of the script to be created. A KeyValuePair table can also be created that stores the parameters that are to be used with each test when creating a script from a template. As shown in FIG. 6, each of the rows of the ScriptName table can be read, in order. Then, for each row in the ScriptName table, a set of parameters can be loaded from the KeyValuePair table to build the parameters to be used in the corresponding template. The data from FIGS. 6 and 7 can be used (with additional key/value pairs) to generate test cases as shown in FIG. 3, which can then be processed as described above with respect to FIGS. 3 and 4.
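- The join described above can be sketched as follows (table and column names follow the description of FIG. 5; the code itself is illustrative, not the specification's implementation): ScriptName rows are read in run order, and for each one the matching KeyValuePair rows supply the template's parameters:

    # Hypothetical sketch: join the ScriptName table with the KeyValuePair
    # table on a shared test identifier to build each template's parameters.
    script_names = [  # (test_id, test_name, script_name, run_order)
        (1, "TC_Reset", "Reset_Server", 1),
        (2, "TC_Quote_Edits_1", "Quote_Edits", 2)]
    key_value_pairs = [  # (test_id, key, value)
        (2, "cmd.TxnType", "Txn"),
        (2, "cmd.MID", "Create")]

    def build_test_cases(scripts, kv_pairs):
        cases = []
        for test_id, test_name, script, _order in sorted(scripts,
                                                         key=lambda r: r[3]):
            params = {k: v for tid, k, v in kv_pairs if tid == test_id}
            cases.append({"name": test_name, "template": script,
                          "params": params})
        return cases

    for case in build_test_cases(script_names, key_value_pairs):
        print(case)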
- In one embodiment of the above-described system, the verification application 100 is integrated with or works in conjunction with a source code control system (SCCS). In that embodiment, when source code has been checked into the SCCS and once the AUT 105 has been built (compiled, linked, programming scripts generated, etc.) and launched, the verification application 100 is automatically launched so that an action script or a specified set of action scripts is executed by the verification application 100. The results of the action script(s) are then analyzed for errors (e.g., by looking for specific tags in the typed-data which represents the results). If there are errors, the verification application 100 can send an error report (e.g., via email, text message, or a post to a wiki) to the quality assurance contact (e.g., specified in the "make" file) and/or to the person that most recently checked back in the code.
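- That check-in-triggered flow might be scripted along these lines (a sketch only: every command name, path, tag, and address below is a placeholder assumption, not part of the specification): build and launch the AUT, run the action scripts, scan the typed-data results for error tags, and report any failures:

    import subprocess, smtplib
    from email.message import EmailMessage

    # Hypothetical post-check-in hook; all commands and paths are placeholders.
    def post_checkin_hook(action_scripts, qa_contact, committer):
        subprocess.run(["make", "build"], check=True)       # build the AUT
        aut = subprocess.Popen(["./aut", "--test-mode"])    # launch it
        try:
            failures = [s for s in action_scripts if run_verifier(s)]
            if failures:
                msg = EmailMessage()
                msg["From"] = "ci@example.com"              # placeholder addresses
                msg["To"] = "%s, %s" % (qa_contact, committer)
                msg["Subject"] = "Automated tests failed: %s" % ", ".join(failures)
                msg.set_content("See the typed-data results for details.")
                with smtplib.SMTP("localhost") as server:
                    server.send_message(msg)
        finally:
            aut.terminate()

    def run_verifier(script):
        # Run one action script; report True if the results contain an error
        # tag (the tag name used here is an assumption).
        subprocess.run(["./verifier", "--action", script, "--out", "out.xml"],
                       check=True)
        return "<Error>" in open("out.xml").read()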
- As described above, the events and commands described herein can be sent and received using a number of messaging protocols (e.g., TCP/IP, 29 West and/or Financial Information Exchange). When communicating between the
verification application 100 and anAUT 105 on a separate machine, the system can utilize any computer-based communications adapter (e.g., adapters utilizing Ethernet, WiFi (including, but not limited to the 802.11 family of protocols), WiMax, and cellular-based packet communication). - While certain configurations of structures have been illustrated for the purposes of presenting the basic structures of the present invention, one of ordinary skill in the art will appreciate that other variations are possible which would still fall within the scope of the appended claims.
Claims (10)
1. A system for testing an application under test, comprising:
a digital memory for storing computer code instructions independent of computer code instructions of the application under test;
a processor for controlling the system by reading and executing the computer code instructions stored in the digital memory independent of computer code instructions of the application under test, the processor reading and executing the computer code instructions stored in the digital memory performing the steps of:
reading an action file representing actions to be performed on the application under test;
requesting that the application under test perform the actions represented in the action file;
waiting for results that are produced by requesting that the application under test perform the actions represented in the action file; and
generating a results file representing results of at least one test of the action file.
2. The system as claimed in claim 1 , wherein the processor reading and executing the computer code instructions stored in the digital memory further performs the steps of:
reading a data table comprising a series of data rows; and
generating, for each row in the series of data rows, a script from the template identified in that row, utilizing calling parameters specified in that row.
3. The system as claimed in claim 1 , wherein the processor reading and executing the computer code instructions stored in the digital memory further performs the steps of:
receiving a signal from a source code control system indicating that the application under test is to be built after at least one of changes to and additions to source code used to build the application under test;
building the application under test;
starting the application under test;
automatically running the action file to test the application under test; and
communicating that an error occurred if results of running the application under test indicate that at least one test in the action file returned an unexpected result.
4. The system as claimed in claim 1 , wherein the action file is a typed-data data file.
5. The system as claimed in claim 1 , wherein the results file is a typed-data data file.
6. The system as claimed in claim 1 , wherein the action file contains at least one reference to a script file to be run at a time specified by the action file.
7. The system as claimed in claim 6 , wherein the action file contains at least one parameter that overrides a default value in the script file to be run at the time specified by the action file.
8. The system as claimed in claim 1 , wherein the processor reading and executing the computer code instructions stored in the digital memory further performs the steps of:
requesting a schema of valid commands that can be accepted by the application under test; and
verifying that commands of the action file are valid based on the schema of valid commands before sending the commands to the application under test.
9. The system as claimed in claim 8 , wherein requesting a schema of valid commands comprises requesting from the application under test the schema of valid commands.
10. The system as claimed in claim 1 , wherein generating a results file representing results of at least one test of the action file comprises generating a hierarchical results file representing results of sub-tests hierarchically with tests containing the sub-tests.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/261,788 US20140237295A1 (en) | 2011-04-12 | 2014-04-25 | System and method for automating testing of computers |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US13/084,952 US8719795B2 (en) | 2011-04-12 | 2011-04-12 | System and method for automating testing of computers |
| US14/261,788 US20140237295A1 (en) | 2011-04-12 | 2014-04-25 | System and method for automating testing of computers |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/084,952 Continuation US8719795B2 (en) | 2011-04-12 | 2011-04-12 | System and method for automating testing of computers |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20140237295A1 | 2014-08-21 |
Family
ID=47007363
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/084,952 Active 2032-02-12 US8719795B2 (en) | 2011-04-12 | 2011-04-12 | System and method for automating testing of computers |
| US14/261,788 Abandoned US20140237295A1 (en) | 2011-04-12 | 2014-04-25 | System and method for automating testing of computers |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/084,952 Active 2032-02-12 US8719795B2 (en) | 2011-04-12 | 2011-04-12 | System and method for automating testing of computers |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US8719795B2 (en) |
Cited By (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130159974A1 (en) * | 2011-12-15 | 2013-06-20 | The Boeing Company | Automated Framework For Dynamically Creating Test Scripts for Software Testing |
| US20140258784A1 (en) * | 2013-03-08 | 2014-09-11 | Infineon Technologies Ag | Machine and Methods for Reassign Positions of a Software Program Based on a Fail/Pass Performance |
| CN106201895A (en) * | 2016-07-25 | 2016-12-07 | 东软集团股份有限公司 | Application testing method and device |
Families Citing this family (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8473916B2 (en) * | 2011-01-25 | 2013-06-25 | Verizon Patent And Licensing Inc. | Method and system for providing a testing framework |
| US10417314B2 (en) * | 2012-06-14 | 2019-09-17 | Open Text Sa Ulc | Systems and methods of a script generation engine |
| US9867043B2 (en) * | 2012-08-28 | 2018-01-09 | Visa International Service Association | Secure device service enrollment |
| KR101408870B1 (en) * | 2012-11-06 | 2014-06-17 | 대구교육대학교산학협력단 | Apparatus and method for multi level tast case generation based on multiple condition control flow graph from unified modeling language sequence diagram |
| US10261611B2 (en) | 2012-12-03 | 2019-04-16 | Apkudo, Llc | System and method for objectively measuring user experience of touch screen based devices |
| US9578133B2 (en) | 2012-12-03 | 2017-02-21 | Apkudo, Llc | System and method for analyzing user experience of a software application across disparate devices |
| US9075781B2 (en) * | 2013-03-15 | 2015-07-07 | Apkudo, Llc | System and method for coordinating field user testing results for a mobile application across various mobile devices |
| IN2013DE02948A (en) * | 2013-10-04 | 2015-04-10 | Unisys Corp | |
| US9283672B1 (en) | 2014-12-11 | 2016-03-15 | Apkudo, Llc | Robotic testing device and method for more closely emulating human movements during robotic testing of mobile devices |
| CN107135188B (en) * | 2016-02-29 | 2020-09-25 | 阿里巴巴集团控股有限公司 | Business realization method, device and system of FIX protocol for financial information exchange |
| US10710239B2 (en) | 2018-11-08 | 2020-07-14 | Bank Of America Corporation | Intelligent control code update for robotic process automation |
| CN110287095A (en) * | 2019-05-20 | 2019-09-27 | 深圳壹账通智能科技有限公司 | A kind of automated testing method, device and storage medium |
| US10963242B2 (en) * | 2019-06-24 | 2021-03-30 | Hartford Fire Insurance Company | Intelligent software agent to facilitate software development and operations |
| US12436873B2 (en) * | 2020-12-31 | 2025-10-07 | Fidelity Information Services, Llc | Systems and methods for global automation and testing services |
| CN116150066B (en) * | 2023-01-11 | 2023-07-04 | 南京宏泰半导体科技股份有限公司 | Bus data processing method and system for integrated circuit test |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030070119A1 (en) * | 2001-10-10 | 2003-04-10 | Dallin Michael Dean | Method and system for testing a software product |
| US20040148590A1 (en) * | 2003-01-27 | 2004-07-29 | Sun Microsystems, Inc., A Delaware Corporation | Hierarchical test suite |
| US20080072100A1 (en) * | 2006-06-05 | 2008-03-20 | International Business Machines Corporation | Generating functional test scripts |
| US20090037914A1 (en) * | 2007-07-31 | 2009-02-05 | Bryan Christopher Chagoly | Automatic configuration of robotic transaction playback through analysis of previously collected traffic patterns |
| US20090307763A1 (en) * | 2008-06-05 | 2009-12-10 | Fiberlink Communications Corporation | Automated Test Management System and Method |
| US20110123973A1 (en) * | 2008-06-06 | 2011-05-26 | Sapient Corporation | Systems and methods for visual test authoring and automation |
| US20110252073A1 (en) * | 2010-04-06 | 2011-10-13 | Justone Database, Inc. | Apparatus, systems and methods for data storage and/or retrieval based on a database model-agnostic, schema-agnostic and workload-agnostic data storage and access models |
| US20120192153A1 (en) * | 2011-01-25 | 2012-07-26 | Verizon Patent And Licensing Inc. | Method and system for providing a testing framework |
Family Cites Families (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6944848B2 (en) * | 2001-05-03 | 2005-09-13 | International Business Machines Corporation | Technique using persistent foci for finite state machine based software test generation |
| US7917895B2 (en) * | 2001-07-27 | 2011-03-29 | Smartesoft, Inc. | Automated software testing and validation system |
| US7055065B2 (en) * | 2001-09-05 | 2006-05-30 | International Business Machines Corporation | Method, system, and computer program product for automated test generation for non-deterministic software using state transition rules |
| US8682636B2 (en) * | 2002-08-30 | 2014-03-25 | Sap Ag | Non-client-specific testing of applications |
| US7451455B1 (en) * | 2003-05-02 | 2008-11-11 | Microsoft Corporation | Apparatus and method for automatically manipulating software products |
| US7581212B2 (en) * | 2004-01-13 | 2009-08-25 | Symphony Services Corp. | Method and system for conversion of automation test scripts into abstract test case representation with persistence |
| US7478365B2 (en) * | 2004-01-13 | 2009-01-13 | Symphony Services Corp. | Method and system for rule-based generation of automation test scripts from abstract test case representation |
| US7337432B2 (en) * | 2004-02-03 | 2008-02-26 | Sharp Laboratories Of America, Inc. | System and method for generating automatic test plans for graphical user interface applications |
| US20050268285A1 (en) * | 2004-05-25 | 2005-12-01 | International Business Machines Corporation | Object oriented GUI test automation |
| US20070092069A1 (en) * | 2005-10-21 | 2007-04-26 | Epiphany, Inc. | Method and system for testing enterprise applications |
| US8239831B2 (en) * | 2006-10-11 | 2012-08-07 | Micro Focus (Ip) Limited | Visual interface for automated software testing |
| US7934201B2 (en) * | 2006-10-17 | 2011-04-26 | Artoftest, Inc. | System, method, and computer readable medium for universal software testing |
| US7793154B2 (en) * | 2006-11-30 | 2010-09-07 | International Business Machines Corporation | Method and implementation for automating processes using data driven pre-recorded transactions |
| US20090083325A1 (en) * | 2007-09-24 | 2009-03-26 | Infosys Technologies, Ltd. | System and method for end to end testing solution for middleware based applications |
| US8347147B2 (en) * | 2009-03-09 | 2013-01-01 | Wipro Limited | Lifecycle management of automated testing |
| US8707263B2 (en) * | 2010-04-19 | 2014-04-22 | Microsoft Corporation | Using a DSL for calling APIS to test software |
| US8966447B2 (en) * | 2010-06-21 | 2015-02-24 | Apple Inc. | Capturing and displaying state of automated user-level testing of a graphical user interface application |
| US8677320B2 (en) * | 2011-04-06 | 2014-03-18 | Mosaic, Inc. | Software testing supporting high reuse of test data |
| US8397114B2 (en) * | 2011-06-01 | 2013-03-12 | Morgan Stanley | Automated regression testing intermediary |
- 2011-04-12 US US13/084,952 patent/US8719795B2/en active Active
- 2014-04-25 US US14/261,788 patent/US20140237295A1/en not_active Abandoned
Patent Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030070119A1 (en) * | 2001-10-10 | 2003-04-10 | Dallin Michael Dean | Method and system for testing a software product |
| US20040148590A1 (en) * | 2003-01-27 | 2004-07-29 | Sun Microsystems, Inc., A Delaware Corporation | Hierarchical test suite |
| US20080072100A1 (en) * | 2006-06-05 | 2008-03-20 | International Business Machines Corporation | Generating functional test scripts |
| US20090037914A1 (en) * | 2007-07-31 | 2009-02-05 | Bryan Christopher Chagoly | Automatic configuration of robotic transaction playback through analysis of previously collected traffic patterns |
| US20090307763A1 (en) * | 2008-06-05 | 2009-12-10 | Fiberlink Communications Corporation | Automated Test Management System and Method |
| US20110123973A1 (en) * | 2008-06-06 | 2011-05-26 | Sapient Corporation | Systems and methods for visual test authoring and automation |
| US20110252073A1 (en) * | 2010-04-06 | 2011-10-13 | Justone Database, Inc. | Apparatus, systems and methods for data storage and/or retrieval based on a database model-agnostic, schema-agnostic and workload-agnostic data storage and access models |
| US20120192153A1 (en) * | 2011-01-25 | 2012-07-26 | Verizon Patent And Licensing Inc. | Method and system for providing a testing framework |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20130159974A1 (en) * | 2011-12-15 | 2013-06-20 | The Boeing Company | Automated Framework For Dynamically Creating Test Scripts for Software Testing |
| US9117028B2 (en) * | 2011-12-15 | 2015-08-25 | The Boeing Company | Automated framework for dynamically creating test scripts for software testing |
| US20140258784A1 (en) * | 2013-03-08 | 2014-09-11 | Infineon Technologies Ag | Machine and Methods for Reassign Positions of a Software Program Based on a Fail/Pass Performance |
| US9003234B2 (en) * | 2013-03-08 | 2015-04-07 | Infineon Technologies Ag | Machine and methods for reassign positions of a software program based on a fail/pass performance |
| CN106201895A (en) * | 2016-07-25 | 2016-12-07 | Neusoft Corporation | Application testing method and device |
Also Published As
| Publication number | Publication date |
|---|---|
| US20120266142A1 (en) | 2012-10-18 |
| US8719795B2 (en) | 2014-05-06 |
Similar Documents
| Publication | Title |
|---|---|
| US8719795B2 (en) | System and method for automating testing of computers |
| US8874479B2 (en) | Systems and methods for testing a financial trading system |
| US7643982B2 (en) | Debugging prototyped system solutions in solution builder wizard environment |
| US8015039B2 (en) | Enterprise verification and certification framework |
| US7676816B2 (en) | Systems and methods for integrating services |
| US20060265475A9 (en) | Testing web services as components |
| US7620885B2 (en) | Automatic generation of documentation for component-based computing solution |
| US8386419B2 (en) | Data extraction and testing method and system |
| US7539936B2 (en) | Dynamic creation of an application's XML document type definition (DTD) |
| US7054881B2 (en) | Method and system for reporting standardized and verified data |
| US7536606B2 (en) | Error detection in web services systems |
| US20090038010A1 (en) | Monitoring and controlling an automation process |
| US8301720B1 (en) | Method and system to collect and communicate problem context in XML-based distributed applications |
| US20080235041A1 (en) | Enterprise data management |
| CN107977308A (en) | Interface test method and device |
| US20080301702A1 (en) | Automated generation of different script versions |
| US8661414B2 (en) | Method and system for testing an order management system |
| CN112540924A (en) | Interface automation test method, device, equipment and storage medium |
| US11301246B2 (en) | Automatically generating continuous integration pipelines |
| CN113094281B (en) | Test method and device for hybrid App |
| CN118643812B (en) | A method and system for automatically filling in a report |
| KR100969877B1 (en) | Quality Test Automation System |
| Bluemke et al. | Tool for automatic testing of web services |
| Sneed | Testing web services in the cloud |
| Acharya | Online Crime Reporting System Project |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |