WO2012073197A1 - Methods and systems for implementing a test automation framework for gui based software applications - Google Patents
- Publication number
- WO2012073197A1 (PCT/IB2011/055376)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- test
- window
- gui
- data
- software application
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Prevention of errors by analysis, debugging or testing of software
- G06F11/3668—Testing of software
- G06F11/3672—Test management
- G06F11/3688—Test management for test execution, e.g. scheduling of test suites
Definitions
- the present invention relates to a test automation framework for GUI based software applications.
- Test automation is the creation of software to test other software. Normally, this is achieved by means of a software tool or script.
- the basis of most test automation tools and scripts is a record and playback engine. That is, the software tool records a manual testing process which may then be played back repeatedly, e.g. with different input data values. It is usually necessary to modify the base script (recording), e.g. to cater for different data, the insertion of verification statements at various points, synchronisation and other enhancements. This is essentially a programming exercise, but most organisations try to use (under-skilled) testing professionals to do the work. There are two main tool types: those that are general purpose, i.e. may be used in many application environments, and those that are specific to a single application environment, e.g. web applications.
- the most common application of test automation is in "regression" testing i.e. the testing of a new version of the software to ensure that all the old functionality (of the previous version) still works i.e. that it has not been broken by the latest changes.
- a method of testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object
- the method comprising: providing a test script for the GUI based software application to be tested; providing test data and verification data for the GUI based software application that can be accessed by the test script; executing a test process controlled by the test script, comprising the steps of: navigating to each window which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin; and/or populating each window which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
- the method comprises providing a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
- the test data set may further include data for launching the application, handling expected and unexpected errors and linking the method to a test management system.
- the method comprises defining a reference indicator for each GUI object type used in the GUI based software application. In an embodiment, the method comprises determining the number of GUI objects per window. In one version, the method of determining the number of GUI objects per window comprises using a mask, which can be provided in the test data set, to indicate the number of GUI objects per window, the mask comprising at least one reference indicator.
- the reference indicators are separated from each other using a predetermined symbol.
- the method of determining the number of GUI objects per window comprises using a counter, which can be provided in the test data set, to indicate the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
- test script itself determines how many GUI objects there are in the test data.
- the method comprises allowing a software tester to amend the test data set by: adding a new GUI object and associated GUI object type to an existing window, should the GUI software application being tested require a new GUI object in an existing window; and/or adding a new window having at least one GUI object and associated GUI object type, should the GUI software application being tested require a new window; and/or editing the test data to confirm that the software application behaves as expected with different test data; and/or changing the GUI object type for a particular GUI object to another GUI object type, thereby allowing the software application to be tested in many different ways without having to edit the test script.
- the method comprises logging the test results and indicating, for each verification, whether it passed, in which case the software application is behaving as expected, or whether it failed, in which case the software application did not behave as expected.
- the test script comprises a script for a Prototype Test that comprises looping once through all three steps of the test process, namely navigating, populating and verifying, for each window.
- the test script comprises a script for a Regression Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each window, and thereafter looping through all three steps again, for each window, using another test data set.
- the test script comprises a script for a Functional Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each function to be tested in a window, before moving onto the next window.
- the Functional Test script comprises a plurality of populating and verifying steps for a window.
- no navigating step is required at a particular point or points in the test sequence if, for example, the test process is at the correct point for the test process to start or continue.
- no populating step is required at a particular point or points in the test sequence, if, for example, the mere existence of a window is to be verified.
- no verification step is required at a particular point or points in the test sequence when there is nothing to check on a particular window e.g. if a user is logging in as a means of getting to somewhere else, then no verification would be required on the login.
- a system for testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object
- the system comprising: memory storing a test script for the GUI based software application to be tested; memory storing test data and verification data for the GUI based software application that can be accessed by the test script; and a processor to execute a test process controlled by the test script, the processor being programmed to: navigate to each window and executing the steps necessary to move the test process to the point from which the test process will begin; and/or populate each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verify the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
- the system comprises a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
- the test data set may further include data for launching the application, handling expected and unexpected errors and linking the method to a test management system.
- the test data set comprises a mask to enable the processor to determine the number of GUI objects per window, the mask comprising at least one reference indicator, each reference indicator being associated with a GUI object type used in the GUI based software application.
- the reference indicators in the mask are separated from each other using a predetermined symbol, such as, for example, an "|".
- the test data sheet comprises a counter to enable the processor to determine the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
- the test script itself determines how many GUI objects there are in the test data.
- a method of implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the method comprising using any one, or a combination, of three basic steps for each test process, the basic steps comprising: Navigate, which comprises executing the steps necessary to move the test process to the next window;
- Populate which comprises entering test data into the GUI objects and/or activating other GUI objects on that window to achieve a specified objective
- Verify which comprises carrying out the checks necessary to confirm that the software application is behaving as expected.
- the method comprises writing scripts for each software application to cater for the three defined basic steps.
- the scripts are dependent upon the nature of the development environment and the types of GUI object used. As indicated above, if this does not change then there is no need to amend the scripts.
- the scripts may be any form of computer program, compiled or interpreted, and may include procedures such as sub-routines, actions, functions, modules etc.
- the method comprises compiling at least one test data set, whether a test data sheet or sheets or a test data array, the scripts getting their data, including window names, object names, input data and expected results from at least one test data set.
- a script for a Prototype Test comprises looping once through all three basic steps, for each window.
- a script for a Regression Test comprises looping through all three basic steps for each window, and thereafter looping through all three basic steps again, for each window, using another test data set.
- a script for a Functional Test comprises looping through all three basic steps, one loop for each window for each function to be tested in a window, before moving onto the next window.
- a system for implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the system comprising any one, or a combination, of the following three basic modules for each test process: a Navigate module, which executes the steps necessary to move the test process to the next window; a Populate module, which enters test data into the GUI objects and/or activates other GUI objects on that window to achieve a specified objective; and a Verify module, which carries out the checks necessary to confirm that the software application is behaving as expected.
- a Navigate module which executes the steps necessary to move the test process to the next window
- a Populate module which enters test data into the GUI objects and/or activates other GUI objects on that window to achieve a specified objective
- a Verify module which carries out the checks necessary to confirm that the software application is behaving as expected.
- GUI: graphical user interface
- Figure 1 illustrates the program flow for a Prototype Test, in which the process navigates to the first window, populates the window if required and then verifies, for example, that there is no error message, with this process continuing for every window in the system;
- Figure 2 illustrates the program flow for a Regression Test, in which the process navigates to the first window, populates the window with test data and then verifies whatever needs to be verified according to the test objective, with this process continuing until each window has been populated and verified, and with this entire process then being repeated as many times as is required using, for example, different data and/or different test objectives;
- Figure 3 illustrates the program flow for a Functional Test, in which the process navigates to the first window, populates the window with test data and then verifies whatever needs to be verified according to the test objective, with this process then continuing with further combinations of populate and verify for the window, and with the process then moving onto the next window;
- Figure 4 illustrates an alternative flow for any of the above, where intermediate verification is required prior to completion of the populating step;
- Figure 5 shows some pseudo code, using a mask, for the populating step;
- Figure 6 shows some pseudo code which uses a counter instead of a mask, with each data field being preceded by a reference to a GUI object type;
- Figure 7 provides an example of test data used to drive the test, which makes use of a mask;
- Figure 8 provides an extract from a test data set which uses a counter instead of a mask; and
- Figure 9 shows a list of possible GUI object types, their associated reference indicators and examples of the object types;
- Figure 12 shows a schematic flow diagram representing a method of testing a GUI based software application comprising at least one window having at least one GUI object
- Figure 13 shows a schematic block diagram of a system for testing a GUI based software application comprising at least one window having at least one GUI object.
- RUMBA which is an acronym for Rubric's Unified Method for Better Automation
- RUMBA which summarises the invention, is based on the premise that all test processes may be divided into sequences of three steps (each of which need not occur in every iteration), as follows:
- Verify which means to carry out the checks necessary to confirm that the software is behaving as expected.
- Figure 1 illustrates the program flow 10 for a Prototype Test
- the navigate 12, populate 14, verify 16 sequence is applied only once for each window or screen in the application.
- the program flow process 10 for the Prototype Test comprises navigating 12 to the first window and checking, for example, its name, and then reporting the pass/fail status, together with the actual and expected results (in the case of a fail) to the test results, which will be described in more detail further below with specific reference to Figures 10 and 11.
- the program flow process 10 then populates 14 that window (for example, with the minimum amount of data necessary to enable the process to continue) and then verifies 16, for example, that there is no error message. The result of this verification is recorded in the test results. The process continues for every window in the system, as indicated by arrow 18. If there is a fatal error, the system re-starts the AUT (Application Under Test) and continues with the next window.
- for a Regression Test 20 there is typically a reduced sequence of screens, i.e. not all of them, which are needed to complete a specific process, and this sequence is repeated numerous times with different sets of test data.
- the Regression Test process 20 navigates to the first window, as indicated by block 22, populates that window with test data, as indicated by block 24, and then verifies whatever needs to be verified according to the test objective, as indicated by block 26. The result of this verification is recorded in the test results.
- the process then continues to the next screen, as indicated by arrow 28, until all the screens in the process being regression tested have been populated and verified. The whole process is then repeated as many times as required, for example, with different data and/or different test objectives, as indicated by arrow 30.
- the Functional Test process 40 is a variant of the above where multiple populate/verify sequences occur for each screen before navigation to the next screen.
- the program flow 40 for a Functional Test comprises navigating to the first window, as indicated by block 42, populating that window with test data, as indicated by block 44, and then verifying whatever needs to be verified according to the test objective, as indicated by block 46. The result of this verification is recorded in the test results.
- the process then continues with further combinations of populating and verifying for that window, as indicated by arrow 48. Eventually it moves to the next window and the process is repeated, for example, with different data and/or different test objectives as indicated by arrow 50.
- the navigate module may invoke the application. It may then use a mask in a data table, similar to that in Figures 5 and 7, to determine how many steps are required to reach each Window and which GUI objects are required to do so.
- a counter may be used together with reference to the object type in each data field, as shown in more detail in Figures 6 and 8.
- a counter or a reference to the object associated with the data may be used instead of using a mask to determine which types of GUI object are to be activated.
- a Case statement may be used to identify the action required for the appropriate GUI object. For example, Case "1" in Figure 5 refers to an Edit field, Case "2" refers to a Button, Case "3" refers to a List etc.
- this module will not execute any steps.
- the populate module is then executed. This populates the window with the test data that is stored in a sheet of the data table (such as the test data table 58 in Figure 7).
- the input test data 60 (in particular, 60.1, 60.2, 60.3 and 60.4 in Figure 7) and the GUI object name 62 (in particular, 62.1, 62.2, 62.3 and 62.4 in Figure 7) are stored in pairs of columns in the data table 58.
- the GUI object name "Date" has associated input test data "11/11/11".
- the GUI object names may be stored in a separate sheet to avoid duplication. Referring now to Figure 9, a list of possible GUI object types 70 (70.1 to 70.9), their associated reference indicators 72 (72.1 to 72.9) and examples of the object types 70 is shown.
- the test data table 58 includes a column 66 that provides the developer's name for a particular window, e.g. Flights 64, Fax and Preview.
- the table 58 further includes a mask 68 for each window 66, the mask determining which type of GUI object 62 is activated and in which order. For example, in the Flights window 64, the mask is 4/3/3/2, which means that the test designer wishes to populate a special edit field (reference indicator 4, with reference to Figure 9), then two list objects (two reference indicators 3, again with reference to Figure 9), and then a push-button (reference indicator 2, with reference to Figure 9).
- Each line in the data table 58 represents the sequence of actions for a specific window. The data exclusively drives the order of events.
- a Case statement is used to identify the action required for the appropriate GUI object.
- the input data is stored in the same order.
- a GUI object does not need any data, for example, a push button
- the relevant data table cell is left blank (as confirmed by the blank space in column 60.4 in the Flights window 64 in Figure 7).
- the populate step may include checkpoints, synchronization points, functions and data output commands. In an embodiment, these are identified by using letter reference indicators instead of number reference indicators in the mask 68. That is, numbers are used to identify normal GUI input actions and letters identify these alternative commands. Where letters are used, the object data sheet identifies the object and the populate data sheet identifies the expected result (for a verification).
- the data output commands may use a separate data sheet for ease of administration.
- the output data may be used for future inputs or future expected results.
- One line in the data table may be used for each window.
- Figure 5 shows some of the pseudo code for the populate sequence.
- the code shown divides the mask into individual components and counts how many components there are (as described above with reference to the example shown in Figure 7). It then activates the appropriate GUI object on the window with the appropriate data 60 from the data table 58.
- the example illustrates a Microsoft Windows based application, but the method may be applied to any other GUI environment including legacy systems with "dumb" terminals.
- Figure 6 shows some pseudo code where, instead of using a mask, each data field may be preceded by a reference to a GUI object type, for example, "1/data", where 1 indicates that the GUI object is, with reference to Figure 9, an edit field.
- a counter may identify the number of objects to be used in any specific window.
- the "1/" in the data column may refer to the position of the object in the object table.
- a vertical character, i.e. "|", may be used instead of the "/" separator.
- the verify module may also be structured like the navigate and populate modules, that is, using a mask (or counter and references) and a case statement.
- the verifications required in the populate module may be handled in the verify module. Since this is the module where the checking of the status of the AUT is carried out, this module may also contain the related facilities of recovery from serious failures of the AUT and/or the recording, for example, in the Smoke Test, of any unexpected changes that are detected in the AUT.
- the mask is based on the concept that each GUI object has a code number (or reference indicator) assigned to it. For example: 1 for an edit field; 2 for a button; 3 for a list and so on (again with reference to Figure 9).
- a counter may be used together with reference to the object type in each data field. Such reference may follow the convention illustrated in the preceding sentence e.g. 1 for an edit field etc. Alternatively, this may be determined programmatically.
- a test results summary (of a regression test) is shown.
- the left window 80 of the test results summary lists the various windows 82 that the test script was programmed to run through.
- the ticked sections such as those represented by arrows 84 and 86, confirm that the relevant windows/processes operated as expected.
- the crossed sections, such as that shown by arrow 88 in Figure 11, indicate that the associated test step has failed but the script has recovered and continued onto the next step.
- the summary window 90 on the right of the screen confirms that after 1 iteration 92, the AUT, overall, failed 94. In particular, there were 29 passes 96 (i.e. ticked sections), 2 fails 98 and 8 warnings 100.
- the window 102 displays certain information of the fail.
- the total value of the tested transaction came to $354, whereas according to the verification data in the test data set (e.g. the table in Figure 7), this amount should have been $214.94, as indicated by arrow 104.
- the tester can easily see which section of the AUT needs to be examined and amended.
- Figure 12 shows a method 120 of testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object.
- the method 120 comprises providing a test script for the GUI based software application to be tested, as indicated by block 122.
- the method 120 further comprises providing test data and verification data for the GUI based software application that can be accessed by the test script, as indicated by block 124.
- the method 120 then executes a test process controlled by the test script, as indicated by block 126.
- the test process comprises navigating to each window, as indicated by block 128, which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin, and/or populating each window, as indicated by block 130, which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective, and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data, as indicated by block 132.
- the method 120 comprises providing a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
- the method 120 comprises defining a reference indicator for each GUI object type used in the GUI based software application.
- the method 120 comprises determining the number of GUI objects per window.
- the method of determining the number of GUI objects per window comprises using a mask, which can be provided in the test data set, to indicate the number of GUI objects per window, the mask comprising at least one reference indicator.
- the reference indicators are separated from each other using a predetermined symbol.
- the method of determining the number of GUI objects per window comprises using a counter, which can be provided in the test data set, to indicate the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
- the method 120 comprises allowing a software tester to amend the test data set by: adding a new GUI object and associated GUI object type to an existing window, should the GUI software application being tested require a new GUI object in an existing window; and/or adding a new window having at least one GUI object and associated GUI object type, should the GUI software application being tested require a new window; and/or editing the test data to confirm that the software application behaves as expected with different test data; and/or changing the GUI object type for a particular GUI object to another GUI object type, thereby allowing the software application to be tested in many different ways without having to edit the test script.
- the method 120 comprises logging the test results and indicating, for each verification, whether it passed, in which case the software application is behaving as expected, or whether it failed, in which case the software application did not behave as expected.
- a system 140 for testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object.
- the system 140 comprises memory 142 to store a test script for the GUI based software application to be tested, memory 144 to store test data and verification data for the GUI based software application that can be accessed by the test script, and a processor 146 to execute a test process controlled by the test script.
- the processor 146 is programmed to execute a navigate module 148 for navigating to each window and executing the steps necessary to move the test process to the point from which the test process will begin, and/or execute a populate module 150 for populating each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective, and/or execute a verify module 152 for verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
- a navigate module 148 for navigating to each window and executing the steps necessary to move the test process to the point from which the test process will begin
- a populate module 150 for populating each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective
- a verify module 152 for verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
- the memory 144 stores a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
- the test data set comprises a mask to enable the processor to determine the number of GUI objects per window, the mask comprising at least one reference indicator, each reference indicator being associated with a GUI object type used in the GUI based software application.
- the reference indicators in the mask are separated from each other using a predetermined symbol.
- the test data sheet comprises a counter to enable the processor to determine the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
- the test script itself determines how many GUI objects there are in the test data.
- Test Management System: it is normal, when a Test Management System is used to execute an automated regression test, to find that the Test Management System provides unhelpful metrics regarding the execution of such a test, for example, that one test has been run and that it has failed, despite the fact that the regression test carried out numerous iterations and verifications.
- most implementations of this method have used a direct Application Program Interface (API) link to the Test Management System, creating dummy test instances for each iteration or verification that is carried out and providing a pass, fail or other status for such a test instance.
- the test requirement(s) that each verification/iteration has been created to meet have their corresponding status updated after execution. Other data may also be provided (a hypothetical sketch of such a link is given at the end of this section).
- the scripts contain no hardcoded data.
- the scripts contain no screen, page or object names.
- scripts may be used, maintained, applied and enhanced by relatively unskilled personnel.
- the technique is tool independent, provided that the tool has specific basic features.
- the present invention thus provides an innovative method of breaking any automated test script into three relatively small components that may be executed repeatedly with different data to test any GUI based software application.
- the method may include no record/playback components.
- the method may use a mask, a counter or programmatic steps to determine which types of GUI object are to be activated.
- the method may be used for intake tests, smoke tests, regression tests and functional tests.
- the method may include the ability to recover from high severity failures.
- the method may include reports on changes that have been made to the software's GUI.
- the method may interface to a Test Management system to provide metrics.
- an embodiment includes a script that converts the test design steps and test data from that Test Management System automatically into the (RUMBA) data sheets, thereby obviating the need for the user to create them manually.
- RUMBA Test Management System
- other scripts are added to facilitate the creation of the data tables. This might include a script that automatically extracts object data into the object data sheet or one that uses the information in the object data sheet to facilitate the user's entry of data into the other datasheets.
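By way of illustration of the Test Management System link described above, per-verification dummy test instances might be created and updated as in the following sketch. The document names no particular Test Management System or API, so every class, method and name below is a hypothetical stand-in rather than a real interface.

```python
class TmsClient:
    """Invented stand-in for a real Test Management System API client."""
    def create_test_instance(self, parent, name):
        print(f"TMS: created instance {name!r} under {parent!r}")
        return name

    def set_status(self, instance, status, detail=""):
        print(f"TMS: {instance} -> {status} {detail}")

def report_results(tms, parent_test, results_log):
    # One dummy test instance per verification/iteration, each with its
    # own pass/fail status, so the TMS metrics reflect every check
    # instead of reporting one monolithic failed test.
    for window, object_name, status, detail in results_log:
        instance = tms.create_test_instance(parent_test, f"{window}/{object_name}")
        tms.set_status(instance, status, detail)
```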
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Hardware Design (AREA)
- Quality & Reliability (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Debugging And Monitoring (AREA)
Abstract
A method of testing a GUI based software application is provided, the application comprising at least one window, with each window comprising at least one GUI object. The method comprises providing a test script for the GUI based software application to be tested, providing test data and verification data for the GUI based software application that can be accessed by the test script, and executing a test process controlled by the test script. The test process in turn comprises navigating to each window which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin, and/or populating each window which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective, and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
Description
METHODS AND SYSTEMS FOR IMPLEMENTING A TEST AUTOMATION
FRAMEWORK FOR GUI BASED SOFTWARE APPLICATIONS
FIELD OF THE INVENTION
The present invention relates to a test automation framework for GUI based software applications.
BACKGROUND TO THE INVENTION
Test automation is the creation of software to test other software. Normally, this is achieved by means of a software tool or script. The basis of most test automation tools and scripts is a record and playback engine. That is, the software tool records a manual testing process which may then be played back repeatedly, e.g. with different input data values. It is usually necessary to modify the base script (recording), e.g. to cater for different data, the insertion of verification statements at various points, synchronisation and other enhancements. This is essentially a programming exercise, but most organisations try to use (under-skilled) testing professionals to do the work. There are two main tool types: those that are general purpose, i.e. may be used in many application environments, and those that are specific to a single application environment, e.g. web applications. The most common application of test automation is in "regression" testing, i.e. the testing of a new version of the software to ensure that all the old functionality (of the previous version) still works, i.e. that it has not been broken by the latest changes.
Traditional drawbacks of test automation are:
1. The effort needed to maintain the scripts every time that the software changes is immense and time consuming, often to the extent that, by the time the updated automated scripts become available, it is too late in the testing process for them to be of use.
2. The cost of developing and maintaining the scripts and data often outweighs the benefits.
3. The skills needed to create the scripts and supervise the creation are in short supply (and are generally underestimated).
4. The whole testing process needs to be changed in order to derive benefit from the high costs of purchasing the tool and of creating and maintaining the scripts.
As a result of the above drawbacks, which the present invention aims to address, many organisations have abandoned test automation.
SUMMARY OF THE INVENTION
According to a first aspect of the invention there is provided a method of testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object, the method comprising: providing a test script for the GUI based software application to be tested;
providing test data and verification data for the GUI based software application that can be accessed by the test script; executing a test process controlled by the test script, comprising the steps of: navigating to each window which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin; and/or populating each window which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
In an embodiment, the method comprises providing a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable. The test data set may further include data for launching the application, handling expected and unexpected errors and linking the method to a test management system.
In an embodiment, the method comprises defining a reference indicator for each GUI object type used in the GUI based software application. In an embodiment, the method comprises determining the number of GUI objects per window.
In one version, the method of determining the number of GUI objects per window comprises using a mask, which can be provided in the test data set, to indicate the number of GUI objects per window, the mask comprising at least one reference indicator.
In this version, if there are a plurality of GUI objects per window, the reference indicators are separated from each other using a predetermined symbol. In an alternate version, the method of determining the number of GUI objects per window comprises using a counter, which can be provided in the test data set, to indicate the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
In yet a further version, the test script itself determines how many GUI objects there are in the test data.
In an embodiment, the method comprises allowing a software tester to amend the test data set by: adding a new GUI object and associated GUI object type to an existing window, should the GUI software application being tested require a new GUI object in an existing window; and/or adding a new window having at least one GUI object and associated GUI object type, should the GUI software application being tested require a new window; and/or editing the test data to confirm that the software application behaves as expected with different test data; and/or
changing the GUI object type for a particular GUI object to another GUI object type, thereby allowing the software application to be tested in many different ways without having to edit the test script.
In an embodiment, the method comprises logging the test results and indicating, for each verification, whether it passed, in which case the software application is behaving as expected, or whether it failed, in which case the software application did not behave as expected.
In an embodiment, the test script comprises a script for a Prototype Test that comprises looping once through all three steps of the test process, namely navigating, populating and verifying, for each window.
In an embodiment, the test script comprises a script for a Regression Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each window, and thereafter looping through all three steps again, for each window, using another test data set.
In an embodiment, the test script comprises a script for a Functional Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each function to be tested in a window, before moving onto the next window.
In an embodiment, the Functional Test script comprises a plurality of populating and verifying steps for a window.
In a variation of the test process, no navigating step is required at a particular point or points in the test sequence if, for example, the test process is at the correct point for the test process to start or continue. In a further variation of the test process, no populating step is required at a particular point or points in the test sequence, if, for example, the mere existence of a window is to be verified.
In yet a further variation of the test process, no verification step is required at a particular point or points in the test sequence when there is nothing to check on a particular window e.g. if a user is logging in as a means of getting to somewhere else, then no verification would be required on the login.
According to a second aspect of the invention there is provided a system for testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object, the system comprising: memory storing a test script for the GUI based software application to be tested; memory storing test data and verification data for the GUI based software application that can be accessed by the test script; and a processor to execute a test process controlled by the test script, the processor being programmed to: navigate to each window and executing the steps necessary to move the test process to the point from which the test process will begin; and/or
populate each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verify the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
In an embodiment, the system comprises a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable. The test data set may further include data for launching the application, handling expected and unexpected errors and linking the method to a test management system.
In one version, the test data set comprises a mask to enable the processor to determine the number of GUI objects per window, the mask comprising at least one reference indicator, each reference indicator being associated with a GUI object type used in the GUI based software application.
In this version, if there are a plurality of GUI objects per window, the reference indicators in the mask are separated from each other using a predetermined symbol, such as, for example, an "|".
In an alternate version, the test data sheet comprises a counter to enable the processor to determine the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
In yet a further version, the test script itself determines how many GUI objects there are in the test data.
According to a third aspect of the invention there is provided a method of implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the method comprising using any one, or a combination, of three basic steps for each test process, the basic steps comprising: Navigate, which comprises executing the steps necessary to move the test process to the next window;
Populate, which comprises entering test data into the GUI objects and/or activating other GUI objects on that window to achieve a specified objective; and
Verify, which comprises carrying out the checks necessary to confirm that the software application is behaving as expected. In an embodiment, the method comprises writing scripts for each software application to cater for the three defined basic steps. The scripts are dependent upon the nature of the development environment and the types of GUI object used. As indicated above, if this does not change then there is no need to amend the scripts. The scripts may be any form of computer program, compiled or interpreted, and may include procedures such as sub-routines, actions, functions, modules etc.
In an embodiment, the method comprises compiling at least one test data set, whether a test data sheet or sheets or a test data array, the scripts getting their data, including window names, object names, input data and expected results from at least one test data set.
In an embodiment, a script for a Prototype Test comprises looping once through all three basic steps, for each window. In an embodiment, a script for a Regression Test comprises looping through all three basic steps for each window, and thereafter looping through all three basic steps again, for each window, using another test data set.
In an embodiment, a script for a Functional Test comprises looping through all three basic steps, one loop for each window for each function to be tested in a window, before moving onto the next window.
In a variation of the testing process, no navigation steps are required at a particular point or points in the test sequence.
In a further variation of the testing process, no population is required at a particular point or points in the test sequence.
In yet a further variation of the testing process, no verification is required at a particular point or points in the test sequence.
According to a fourth aspect of the invention there is provided a system for implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the system comprising any one, or a combination, of the following three basic modules for each test process: a Navigate module, which executes the steps necessary to move the test process to the next window;
a Populate module, which enters test data into the GUI objects and/or activates other GUI objects on that window to achieve a specified objective; and a Verify module, which carries out the checks necessary to confirm that the software application is behaving as expected.
Some of the terms used to describe the invention are defined in the International Software Testing Qualification Board's "Standard Glossary of terms used in Software Testing". In the text that follows, the term "window" applies equally to "screen", "page", "dialog box", "pop-up", "tab", "dumb screen" or similar entity in which GUI objects are placed. In essence, however, a GUI object is an element of a graphical user interface (GUI) that displays an information arrangement changeable by the user, such as a window or a text box.
BRIEF DESCRIPTION OF THE DRAWINGS Figure 1 illustrates the program flow for a Prototype Test, in which the process navigates to the first window, populates the window if required and then verifies, for example, that there is no error message, with this process continuing for every window in the system;
Figure 2 illustrates the program flow for a Regression Test, in which the process navigates to the first window, populates the window with test data and then verifies whatever needs to be verified according to the test objective, with this process continuing until each window has been populated and verified, and with this entire process then being repeated as many times as is required using, for example, different data and/or different test objectives;
Figure 3 illustrates the program flow for a Functional Test, in which the process navigates to the first window, populates the window with test data and then verifies whatever needs to be verified according to the test objective, with this process then continuing with further combinations of populate and verify for the window, and with the process then moving onto the next window;
Figure 4 illustrates an alternative flow for any of the above, where intermediate verification is required prior to completion of the populating step;
Figure 5 shows some pseudo code, using a mask, for the populating step;
Figure 6 shows some pseudo code which uses a counter instead of a mask, with each data field being preceded by a reference to a GUI object type;
Figure 7 provides an example of test data used to drive the test, which makes use of a mask, and in particular test data for the populating step in a Prototype Test, with reference to Figure 5;
Figure 8 provides an extract from a test data set which uses a counter instead of a mask, with reference to Figure 6;
Figure 9 shows a list of possible GUI object types, their associated reference indicators and examples of the object types;
Figures 10 and 11 show portions of an example test results sheet.
Figure 12 shows a schematic flow diagram representing a method of testing a GUI based software application comprising at least one window having at least one GUI object; and
Figure 13 shows a schematic block diagram of a system for testing a GUI based software application comprising at least one window having at least one GUI object.
DETAILED DESCRIPTION OF THE DRAWINGS
RUMBA (which is an acronym for Rubric's Unified Method for Better Automation), which summarises the invention, is based on the premise that all test processes may be divided into sequences of three steps (each of which need not occur in every iteration), as follows:
1. Navigate, which means to execute the steps necessary to move the process to the next window;
2. Populate, which means to enter test data into the GUI objects and/or activate other GUI objects on that window to achieve a specified objective; and
3. Verify, which means to carry out the checks necessary to confirm that the software is behaving as expected.
It is accepted that verification may also take place during the populate process. This process is then repeated over and over until all the testing objectives have been completed.
Three "styles" of test have been identified, as follows:
1. Prototype Tests (usually called "sanity checks" or "health checks" in South Africa and "Smoke Tests" or "Intake Tests" in the USA), which will be described in more detail below with reference to Figure 1;
2. Regression Tests, as described above, which will be described in more detail below with reference to Figure 2; and
3. Functional Tests, which will be described in more detail below with reference to Figure 3.
Turning now to Figure 1, which illustrates the program flow 10 for a Prototype Test, the navigate 12, populate 14, verify 16 sequence is applied only once for each window or screen in the application. In particular, the program flow process 10 for the Prototype Test comprises navigating 12 to the first window and checking, for example, its name, and then reporting the pass/fail status, together with the actual and expected results (in the case of a fail) to the test results, which will be described in more detail further below with specific reference to Figures 10 and 11.
The program flow process 10 then populates 14 that window (for example, with the minimum amount of data necessary to enable the process to continue) and then verifies 16, for example, that there is no error message. The result of this verification is recorded in the test results. The process continues for every window in the system, as indicated by arrow 18. If there is a fatal error, the system re-starts the AUT (Application Under Test) and continues with the next window.
Turning now to Figure 2, for a Regression Test 20 there is typically a reduced sequence of screens, i.e. not all of them, which are needed to complete a specific process, and this sequence is repeated numerous times with different sets of test data. The Regression Test process 20 navigates to the first window, as indicated by block 22, populates that window with test data, as indicated by block 24, and then verifies whatever needs to be verified according to the test objective, as indicated by block 26. The result of this verification is recorded in the test results. The process then continues to the next screen, as indicated by arrow 28, until all the screens in the process being regression tested have been populated and verified. The whole process is then repeated as many times as required, for example, with different data and/or different test objectives, as indicated by arrow 30.
Turning now to Figure 3, the Functional Test process 40 is a variant of the above where multiple populate/verify sequences occur for each screen before navigation to the next screen. In particular, the program flow 40 for a Functional Test comprises navigating to the first window, as indicated by block 42, populating that window with test data, as indicated by block 44, and then verifying whatever needs to be verified according to the test objective, as indicated by block 46. The result of this verification is recorded in the test results. The process then continues with further combinations of populating and verifying for that window, as indicated by arrow 48. Eventually it moves to the next window and the process is repeated, for example, with different data and/or different test objectives as indicated by arrow 50.
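The three flows can be summarised as loop structures. The following is a minimal sketch in Python, assuming hypothetical navigate, populate and verify functions; in practice each step would be driven by the test data sheets rather than the stub print statements used here.

```python
# Hypothetical navigate/populate/verify stubs; the real modules are
# data-driven, as described in the module sections below.
def navigate(window):
    print(f"navigate -> {window}")        # move the test process to `window`

def populate(window, data=None):
    print(f"populate {window}: {data}")   # enter test data / activate objects

def verify(window, data=None):
    print(f"verify {window}: {data}")     # check expected behaviour, log result

def prototype_test(windows):
    # Prototype Test: one navigate/populate/verify pass per window.
    for window in windows:
        navigate(window)
        populate(window)
        verify(window)

def regression_test(process_windows, data_sets):
    # Regression Test: a reduced window sequence for one business
    # process, repeated once per set of test data.
    for data_set in data_sets:
        for window in process_windows:
            navigate(window)
            populate(window, data_set.get(window))
            verify(window, data_set.get(window))

def functional_test(window_functions):
    # Functional Test: several populate/verify combinations per window
    # before navigating to the next window.
    for window, function_data_list in window_functions:
        navigate(window)
        for function_data in function_data_list:
            populate(window, function_data)
            verify(window, function_data)
```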
The same navigate, populate and verify steps may be used in each of the three styles of test listed, or any other style that may be created using this method, with each of these steps being described further below with reference to navigate, populate and verify modules. All of the above techniques have been applied in multiple environments (and proven to work) using the Hewlett Packard (previously Mercury Interactive Corporation) test automation tool known as QuickTest Professional (and, more recently, Unified Functional Testing). The method has also been shown to work using IBM's Rational Functional Testing (RFT) tool. The same techniques may be applied to any other tool with the appropriate programming capabilities.
Navigate module
The navigate module may invoke the application. It may then use a mask in a data table, similar to that in Figures 5 and 7, to determine how many steps are required to reach each Window and which GUI objects are required to do so. Alternatively, a counter may be used together with reference to the object type in each data field, as shown in more detail in Figures 6 and 8. Thus, instead of using a mask to determine which types of GUI object are to be activated, a counter or a reference to the object associated with the data may be used.
Alternatively, the number of steps required may be assessed programmatically. A Case statement may be used to identify the action required for the appropriate GUI object. For example, Case "1" in Figure 5 refers to an Edit field, Case "2" refers to a Button, Case "3" refers to a List etc.
Obviously, if the AUT is positioned at the desired window already, this module will not execute any steps.
Populate module
After the navigate module has run, the populate module is then executed. This populates the window with the test data that is stored in a sheet of the data table (such as the test data table 58 in Figure 7). In the Prototype Test implementation (i.e. Figure 1), the input test data 60 (in particular, 60.1, 60.2, 60.3 and 60.4 in Figure 7) and the GUI object name 62 (in particular, 62.1, 62.2, 62.3 and 62.4 in Figure 7) are stored in pairs of columns in the data table 58. For example, in the Flights window 64, the GUI object name "Date" has associated input test data "11/11/11". In other implementations, the GUI object names may be stored in a separate sheet to avoid duplication.
Referring now to Figure 9, a list of possible GUI object types 70 (70.1 to 70.9), their associated reference indicators 72 (72.1 to 72.9) and examples of the object types 70 is shown.
Turning back to Figure 7, the test data table 58 includes a column 66 that provides the developer's name for a particular window, e.g. Flights 64, Fax and Preview. The table 58 further includes a mask 68 for each window 66, the mask determining which type of GUI object 62 is activated and in which order. For example, in the Flights window 64, the mask is 4/3/3/2, which means that the test designer wishes to populate a special edit field (reference indicator 4, with reference to Figure 9), then two list objects (two reference indicators 3, again with reference to Figure 9), and then a push-button (reference indicator 2, with reference to Figure 9). Each line in the data table 58 represents the sequence of actions for a specific window. The data exclusively drives the order of events.
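By way of illustration, one row of such a data table could be represented as follows. Only the Flights window name, the 4/3/3/2 mask, the "Date" object with its "11/11/11" data and the blank push-button cell come from the example described above; the remaining object names and values are hypothetical placeholders.

```python
# Sketch of the Flights row of a Figure 7-style data table: the window
# name, the mask that orders the GUI object types, and object/data
# pairs (blank data where none is needed, e.g. a push button).
flights_row = {
    "window": "Flights",
    "mask": "4/3/3/2",              # special edit, list, list, push button
    "objects": [
        ("Date", "11/11/11"),        # 4: special edit field
        ("Fly From", "Frankfurt"),   # 3: list (hypothetical value)
        ("Fly To", "London"),        # 3: list (hypothetical value)
        ("FLIGHT", ""),              # 2: push button, cell left blank
    ],
}
```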
As shown in Figures 5 and 6, a Case statement is used to identify the action required for the appropriate GUI object. The input data is stored in the same order. Where a GUI object does not need any data, for example a push button, the relevant data table cell is left blank (as confirmed by the blank space in column 60.4 in the Flights window 64 in Figure 7). The populate step may include checkpoints, synchronisation points, functions and data output commands. In an embodiment, these are identified by using letter reference indicators instead of number reference indicators in the mask 68. That is, numbers identify normal GUI input actions and letters identify these alternative commands. Where letters are used, the object data sheet identifies the object and the populate data sheet identifies the expected result (for a verification). The data output commands may use a separate data sheet for ease of administration. The output data may be used for future inputs or future expected results. One line in the data table may be used for each window.
Figure 5 shows some of the pseudo code for the populate sequence. The code shown divides the mask into individual components and counts how many components there are (as described above with reference to the example shown in Figure 7). It then activates the appropriate GUI object on the window with the appropriate data 60 from the data table 58. The example illustrates a Microsoft Windows based application, but the method may be applied to any other GUI environment including legacy systems with "dumb" terminals.
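The following Python rendering of that pseudo code is a sketch under assumed data-sheet field names: it splits the mask, counts its components and dispatches on each reference indicator. The letter codes shown are examples only, since the text leaves the specific letters to the implementer.

```python
# Sketch of the populate sequence of Figure 5. Number codes follow Figure 9;
# the letter codes "V" (verification/checkpoint) and "S" (synchronisation)
# are invented examples of the letter indicators described above.

def populate(row, gui, results):
    codes = row["mask"].split("/")          # e.g. "4/3/3/2" -> 4 components
    for i, code in enumerate(codes):
        obj, data = row["objects"][i], row["data"][i]
        if code == "1":                     # edit field
            gui.set_edit(obj, data)
        elif code == "2":                   # push-button: its data cell is blank
            gui.click_button(obj)
        elif code == "3":                   # list
            gui.select_list_item(obj, data)
        elif code == "4":                   # special edit field, e.g. a date
            gui.set_special_edit(obj, data)
        elif code == "V":                   # checkpoint: data holds expected value
            results.log(obj, gui.read_value(obj) == data)
        elif code == "S":                   # synchronisation point
            gui.wait_for(obj)
```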
Figure 6 shows some pseudo code where, instead of using a mask, each data field may be preceded by a reference to a GUI object type, for example "1/data", where the 1 indicates that the GUI object is, with reference to Figure 9, an edit field. In this case a counter may identify the number of objects to be used in any specific window. Similarly, where separate object and data tables are used, the "1/" in the data column may refer to the position of the object in the object table. Instead of a "/", a vertical bar character (i.e. "|") may be used, as shown in Figure 8.
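A sketch of that variant follows, assuming a "/" (or "|") separator between the type reference and the data; the counter is simply a pass over the populated data cells. Field names and gui.* calls are, again, assumptions.

```python
# Sketch of the Figure 6 convention: each data cell carries its own object-
# type prefix, e.g. "1/London" (or "1|London"), and a counter over the cells
# replaces the per-window mask.

def populate_prefixed(row, gui, sep="/"):
    for i, cell in enumerate(row["data"]):            # counter over the cells
        code, _, data = cell.partition(sep)           # "1/London" -> "1", "London"
        obj = row["objects"][i]
        if code == "1":
            gui.set_edit(obj, data)
        elif code == "2":
            gui.click_button(obj)
        elif code == "3":
            gui.select_list_item(obj, data)
```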
Verify module
The verify module may also be structured like the navigate and populate modules, that is, using a mask (or counter and references) and a Case statement. The verifications required in the populate module may be handled in the verify module. Since this is the module where the checking of the status of the AUT is carried out, this module may also contain the related facilities of recovery from serious failures of the AUT and/or the recording, for example in the Smoke Test, of any unexpected changes that are detected in the AUT. As described above, the mask is based on the concept that each GUI object has a code number (or reference indicator) assigned to it: for example, 1 for an edit field, 2 for a button, 3 for a list and so on (again with reference to Figure 9). These are placed in the appropriate column by the Test Analyst using this method, separated, for example, by a "/" or "|" character. Alternatively, a counter may be used together with a reference to the object type in each data field, such reference following the convention illustrated above, e.g. 1 for an edit field. Alternatively, this may be determined programmatically.
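A hedged sketch of a verify module with these recovery and recording duties might look as follows; gui.read_value and gui.recover stand in for tool- and implementation-specific facilities.

```python
# Sketch of a verify module: mask-and-case structure, plus recovery from
# serious AUT failures and logging of unexpected differences.

def verify(row, gui, results):
    all_ok = True
    for i, _code in enumerate(row["mask"].split("/")):
        obj, expected = row["objects"][i], row["expected"][i]
        try:
            actual = gui.read_value(obj)    # check the AUT's current state
        except RuntimeError:                # serious failure of the AUT
            gui.recover()                   # e.g. restart to a known window
            results.log(obj, False, "recovered after AUT failure")
            all_ok = False
            continue
        ok = actual == expected
        results.log(obj, ok, f"expected {expected!r}, got {actual!r}")
        all_ok = all_ok and ok
    return all_ok
```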
It is envisaged that in each of the above modules all references to specific window names, object names and test data values may be replaced by references to a cell in a test data sheet (such as test data table 58). This enables the test script to remain unchanged when changes take place in the AUT, except in the unusual circumstance of a new type of GUI object being added to the AUT. Even then, all that needs to be done, for example in the populate module, is to allocate a new code number to the new GUI object type and add a new section to the Case statement to handle it. The implementer of this method may decide to allow specific exceptions to this intention, for example where the application is thought unlikely to change. An example of such an exception might be the hard-coding of the error handling mechanism of the AUT, where it may be assumed that such a mechanism is unlikely to change in forthcoming releases. It is the recommendation of this method that, where such exceptions occur, they are highlighted in the script, for example by means of comments, and/or documented in the associated documentation describing the implementation of the method.
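As a sketch of how cheap that extension can be, a dispatch-table form of the case statement reduces a new GUI object type to one registered handler. Indicator "5" and set_slider are invented for the example.

```python
# Extending the case statement for a new GUI object type: allocate a new
# code number and add one handler. Existing entries mirror Figure 9.

HANDLERS = {
    "1": lambda gui, obj, data: gui.set_edit(obj, data),          # edit field
    "2": lambda gui, obj, data: gui.click_button(obj),            # button
    "3": lambda gui, obj, data: gui.select_list_item(obj, data),  # list
}

# A new object type appears in the AUT: register code "5" for it (invented).
HANDLERS["5"] = lambda gui, obj, data: gui.set_slider(obj, data)
```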
Turning now to Figures 10 and 11, the test results summary (of a regression test) is shown. The left window 80 of the test results summary lists the various windows 82 that the test script was programmed to run through. The ticked sections, such as those represented by arrows 84 and 86, confirm that the relevant windows/processes operated as expected. Conversely, the crossed sections, such as that shown by arrow 88 in Figure 11, indicate that the associated test step has failed but the script has recovered and continued onto the next step.
Referring back to Figure 10, the summary window 90 on the right of the screen confirms that after 1 iteration 92, the AUT, overall, failed 94. In particular, there were 29 passes 96 (i.e. ticked sections), 2 fails 98 and 8 warnings 100.
Turning back to Figure 11, by highlighting the failed window/process 88, the window 102 displays details of the failure. In this particular case, the total value of the tested transaction came to $354, whereas according to the verification data in the test data set (e.g. the table in Figure 7), this amount should have been $214.94, as indicated by arrow 104. In this manner, the tester can easily see which section of the AUT needs to be examined and amended.
Although already substantially described, for the sake of completeness, Figure 12 shows a method 120 of testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object. The method 120 comprises providing a test script for the GUI based software application to be tested, as indicated by block 122. The method 120 further comprises providing test data and verification data for the GUI based software application that can be accessed by the test script, as indicated by block 124. The method 120 then executes a test process controlled by the test script, as indicated by block 126. The test process comprises navigating to each window, as indicated by block 128, which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin, and/or populating each window, as indicated by block 130, which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective, and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data, as indicated by block 132.
As described above, with particular reference to Figure 7, the method 120 comprises providing a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable. In an embodiment, the method 120 comprises defining a reference indicator for each GUI object type used in the GUI based software application.
In an embodiment, the method 120 comprises determining the number of GUI objects per window.
In one version, the method of determining the number of GUI objects per window comprises using a mask, which can be provided in the test data set, to indicate the number of GUI objects per window, the mask comprising at least one reference indicator.
In this version, if there are a plurality of GUI objects per window, the reference indicators are separated from each other using a predetermined symbol.
In an alternate version, the method of determining the number of GUI objects per window comprises using a counter, which can be provided in the test data set, to indicate the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type. Conveniently, the method 120 comprises allowing a software tester to amend the test data set by:
adding a new GUI object and associated GUI object type to an existing window, should the GUI software application being tested require a new GUI object in an existing window; and/or adding a new window having at least one GUI object and associated GUI object type, should the GUI software application being tested require a new window; and/or editing the test data to confirm that the software application behaves as expected with different test data; and/or changing the GUI object type for a particular GUI object to another GUI object type, thereby allowing the software application to be tested in many different ways without having to edit the test script.
In an embodiment, the method 120 comprises logging the test results and indicating, for each verification, whether it passed, in which case the software application is behaving as expected, or whether it failed, in which case the software application did not behave as expected.
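A minimal sketch of such result logging, producing per-verification entries and an iteration summary of the kind shown in Figures 10 and 11, is given below; the layout of the log entries is an assumption.

```python
# Sketch of per-verification result logging with an iteration summary.
from collections import Counter

class TestResults:
    def __init__(self):
        self.entries = []

    def log(self, check, passed, detail=""):
        status = "pass" if passed else "fail"
        self.entries.append({"check": check, "status": status, "detail": detail})

    def summary(self):
        counts = Counter(e["status"] for e in self.entries)
        overall = "failed" if counts["fail"] else "passed"   # cf. Figure 10
        return {"overall": overall, **counts}

results = TestResults()
results.log("Flights total", False, "expected $214.94, got $354")  # cf. Figure 11
print(results.summary())   # {'overall': 'failed', 'fail': 1}
```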
Turning now to Figure 13, and although also already substantially described, a system 140 is shown for testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object. The system 140 comprises memory 142 to store a test script for the GUI based software application to be tested, memory 144 to store test data and verification data for the GUI based software application that can be accessed by the test script, and a processor 146 to execute a test process controlled by the test script. The processor 146 is programmed to execute a navigate module 148 for navigating to each window and executing the steps necessary to move the test process to the point from which the test process will begin, and/or execute a populate module 150 for populating each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective, and/or execute a verify module 152 for verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data. In an embodiment, the memory 144 stores a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
In one version, the test data set comprises a mask to enable the processor to determine the number of GUI objects per window, the mask comprising at least one reference indicator, each reference indicator being associated with a GUI object type used in the GUI based software application.
In this version, if there are a plurality of GUI objects per window, the reference indicators in the mask are separated from each other using a predetermined symbol. In an alternate version, the test data sheet comprises a counter to enable the processor to determine the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type. In yet a further version, the test script itself determines how many GUI objects there are in the test data.
Some implementations of this method may employ a Metrics Interface to a Test Management System. When a Test Management System is used to execute an automated regression test, it typically provides unhelpful metrics regarding the execution of such a test, for example that one test has been run and that it has failed, despite the fact that the regression test carried out numerous iterations and verifications. Most implementations of this method have used a direct Application Program Interface (API) link to the Test Management System, creating a dummy test instance for each iteration or verification that is carried out and providing a pass, fail or other status for each such test instance. Similarly, the test requirement(s) that each verification/iteration has been created to meet has its corresponding status updated after execution. Other data may also be provided.
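The sketch below illustrates the idea only; the tms client object and its methods are hypothetical, since each Test Management System exposes its own API calls.

```python
# Hedged sketch of a Metrics Interface: one dummy test instance per
# verification/iteration, plus a requirement status update. All tms.* calls
# are invented placeholders for a real Test Management System API.

def publish_metrics(tms, run_name, entries):
    for entry in entries:
        instance = tms.create_test_instance(            # dummy instance
            name=f"{run_name}/{entry['check']}")
        instance.set_status(entry["status"])            # pass, fail or other
        for req in tms.requirements_for(instance):      # linked requirements
            req.update_status(entry["status"])
```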
It is the intention of this method that documentation be provided for each implementation, since there will be minor variations from implementation to implementation. This documentation should enumerate the codes for the various masks together with any exceptions. This invention overcomes the problems listed in the Background to the Invention section above because:
1. The scripts require virtually no maintenance.
2. The scripts contain no hardcoded data.
3. The scripts contain no screen, page or object names.
4. The scripts are entirely data driven.
5. As a result, the scripts may be used, maintained, applied and enhanced by relatively unskilled personnel.
6. The technique is tool independent, provided that the tool has specific basic features.
The present invention thus provides an innovative method of breaking any automated test script into three relatively small components that may be executed repeatedly with different data to test any GUI based software application. The method may include no record/playback components. The method may use a mask, a counter or programmatic steps to determine which types of GUI object are to be activated. The method may be used for intake tests, smoke tests, regression tests and functional tests. The method may include the ability to recover from high severity failures. The method may include reports on changes that have been made to the software's GUI. The method may interface to a Test Management system to provide metrics.
Where the method includes such an interface with a Test Management System, an embodiment includes a script that converts the test design steps and test data from that Test Management System automatically into the (RUMBA) data sheets, thereby obviating the need for the user to create them manually. However, it should be noted that not all data is created automatically, i.e. some may still have to be added manually, e.g. error recovery data. In an embodiment, other scripts are added to facilitate the creation of the data tables. These might include a script that automatically extracts object data into the object data sheet, or one that uses the information in the object data sheet to facilitate the user's entry of data into the other data sheets.
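A sketch of such a conversion script is given below. The exported step layout and the data-sheet columns are assumptions and, as noted, items such as error recovery data would still be added by hand.

```python
# Sketch: convert exported test-design steps into a data sheet (CSV here),
# one line per window, with paired object/data columns as in Figure 7.
import csv

def steps_to_data_sheet(steps, out_path):
    """steps: iterable of dicts with 'window', 'mask', 'objects', 'data'."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        for step in steps:                     # one data-sheet line per window
            row = [step["window"], step["mask"]]
            for obj, data in zip(step["objects"], step["data"]):
                row.extend([obj, data])        # paired object/data columns
            writer.writerow(row)
```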
Claims
1. A method of testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object, the method comprising: providing a test script for the GUI based software application to be tested; providing test data and verification data for the GUI based software application that can be accessed by the test script; executing a test process controlled by the test script, comprising the steps of: navigating to each window which in turn comprises executing the steps necessary to move the test process to the point from which the test process will begin; and/or populating each window which in turn comprises entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verifying the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
2. The method of claim 1, which comprises providing a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
3. The method of claim 2, which comprises defining a reference indicator for each GUI object type used in the GUI based software application.
4. The method of either claim 2 or claim 3, which comprises determining the number of GUI objects per window.
5. The method of claim 4, wherein the method of determining the number of GUI objects per window comprises using a mask, which can be provided in the test data set, to indicate the number of GUI objects per window, the mask comprising at least one reference indicator.
6. The method of claim 5, wherein, if there are a plurality of GUI objects per window, the reference indicators are separated from each other using a predetermined symbol.
7. The method of claim 4, wherein the method of determining the number of GUI objects per window comprises using a counter, which can be provided in the test data set, to indicate the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
8. The method of any one of claims 2 to 7, which allows a software tester to amend the test data set by: adding a new GUI object and associated GUI object type to an existing window, should the GUI software application being tested require a new GUI object in an existing window; and/or adding a new window having at least one GUI object and associated GUI object type, should the GUI software application being tested require a new window; and/or editing the test data to confirm that the software application behaves as expected with different test data; and/or changing the GUI object type for a particular GUI object to another GUI object type, thereby allowing the software application to be tested in many different ways without having to edit the test script.
9. The method of any one of the preceding claims, which comprises logging the test results and indicating, for each verification, whether it passed, in which case the software application has behaved as expected, or whether it failed, in which case the software application did not behave as expected.
10. The method of any one of the preceding claims, wherein the test script comprises a script for a Prototype Test that comprises looping once through all three steps of the test process, namely navigating, populating and verifying, for each window.
11. The method of any one of the preceding claims, wherein the test script comprises a script for a Regression Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each window, and thereafter looping through all three steps again, for each window, using another test data set.
12. The method of any one of the preceding claims, wherein the test script comprises a script for a Functional Test that comprises looping through all three steps of the test process, namely navigating, populating and verifying, for each function to be tested in a window, before moving onto the next window.
13. The method of claim 12, wherein the Functional Test test script comprises a plurality of populating and verifying steps for a window.
14. The method of any one of the preceding claims, wherein in a variation of the test process, no navigating step is required at a particular point or points in the test sequence.
15. The method of any one of the preceding claims, wherein in a variation of the test process, no populating step is required at a particular point or points in the test sequence.
16. The method of any one of the preceding claims, wherein in a variation of the test process, no verification is required at a particular point or points in the test sequence.
17. A system for testing a GUI based software application comprising at least one window, with each window comprising at least one GUI object, the system comprising: memory storing a test script for the GUI based software application to be tested; memory storing test data and verification data for the GUI based software application that can be accessed by the test script; and a processor to execute a test process controlled by the test script, the processor being programmed to: navigate to each window and execute the steps necessary to move the test process to the point from which the test process will begin; and/or populate each window by entering test data into the GUI objects and/or activating other GUI objects in the window to achieve a specified objective; and/or verify the test process by carrying out the checks necessary to confirm that the software application is behaving as expected with reference to the verification data.
18. The system of claim 17, which comprises a test data set, comprising a data sheet or a plurality of data sheets, specifying, for each window, a window name, at least one GUI object and associated GUI object type, test data for each GUI object, if applicable, and verification data for each GUI object, if applicable.
19. The system of claim 18, wherein the test data set comprises a mask to enable the processor to determine the number of GUI objects per window, the mask comprising at least one reference indicator, each reference indicator being associated with a GUI object type used in the GUI based software application.
20. The system of claim 19, wherein, if there are a plurality of GUI objects per window, the reference indicators in the mask are separated from each other using a predetermined symbol.
21. The system of claim 18, wherein the test data sheet comprises a counter to enable the processor to determine the number of GUI objects per window, wherein the test data in the test data set comprises at least one reference indicator to indicate the GUI object type.
22. A method of implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the method comprising using any one, or a combination, of three basic steps for each test process, the basic steps comprising:
Navigate, which comprises executing the steps necessary to move the test process to the point from which the test process will begin;
Populate, which comprises entering test data into the GUI objects and/or activating other GUI objects on that window to achieve a specified objective; and
Verify, which comprises carrying out the checks necessary to confirm that the software application is behaving as expected.
23. The method of claim 22, which comprises writing scripts for each software application to cater for the three defined basic steps, the scripts being dependent upon the nature of the development environment and the types of GUI object used.
24. The method of either claim 22 or claim 23, which comprises compiling at least one test data set, whether a test data sheet or sheets or a test data array, the scripts getting their data, including window names, object names, input data and expected results, from the at least one test data set.
25. The method of any one of claims 22 to 24, wherein a script for a Prototype Test comprises looping through all three basic steps, one iteration for each window.
26. The method of any one of claims 22 to 25, wherein a script for a Regression Test comprises looping through all three basic steps, one loop for each window and one iteration for each set of data needed for the process.
27. The method of any one of claims 22 to 26, wherein a script for a Functional Test comprises looping through all three basic steps, one loop for each window for each function to be tested on a screen, and then moving on to the next screen where the process is repeated.
28. A system for implementing a test automation framework for GUI based software applications comprising at least one window, with each window comprising at least one GUI object, the system comprising any one, or a combination, of the following three basic modules for each test process: a Navigate module, which executes the steps necessary to move the test process to the next window; a Populate module, which allows test data to be entered into the fields and/or the interaction with GUI objects on that window to achieve a specified objective; and a Verify module, which carries out the checks necessary to confirm that the software application is behaving as expected.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| ZA201008582 | 2010-11-30 | ||
| ZA2010/08582 | 2010-11-30 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2012073197A1 (en) | 2012-06-07 |
Family
ID=46171257
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IB2011/055376 Ceased WO2012073197A1 (en) | 2010-11-30 | 2011-11-30 | Methods and systems for implementing a test automation framework for gui based software applications |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2012073197A1 (en) |
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20020091968A1 (en) * | 2001-01-08 | 2002-07-11 | Donald Moreaux | Object-oriented data driven software GUI automated test harness |
| US7093238B2 (en) * | 2001-07-27 | 2006-08-15 | Accordsqa, Inc. | Automated software testing and validation system |
| US7290245B2 (en) * | 2001-10-18 | 2007-10-30 | Microsoft Corporation | Methods and systems for navigating deterministically through a graphical user interface |
| US20070061625A1 (en) * | 2005-09-15 | 2007-03-15 | Acosta Juan Jr | Automation structure for software verification testing |
| WO2008045117A1 (en) * | 2006-10-06 | 2008-04-17 | Nielsen Media Research, Inc. | Methods and apparatus to analyze computer software |
| US20090320002A1 (en) * | 2008-06-20 | 2009-12-24 | Cadence Design Systems, Inc. | Method and system for testing and analyzing user interfaces |
Non-Patent Citations (1)
| Title |
|---|
| MEMON ET AL.: "Automated Test Oracles for GUIs", ACM SIGSOFT SOFTWARE ENGINEERING NOTES, vol. 25, no. ISS.6, 2000, pages 30 - 39 * |
Cited By (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2014107760A1 (en) * | 2013-01-08 | 2014-07-17 | Enov8 Data Pty Ltd | Systems and methods for managing the utilisation of test data |
| WO2014186429A1 (en) * | 2013-05-15 | 2014-11-20 | Microsoft Corporation | Automatic discovery of system behavior |
| CN105264492A (en) * | 2013-05-15 | 2016-01-20 | 微软技术许可有限责任公司 | Automatic discovery of system behavior |
| US9395890B2 (en) | 2013-05-15 | 2016-07-19 | Microsoft Technology Licensing, Llc | Automatic discovery of system behavior |
| CN105264492B (en) * | 2013-05-15 | 2018-12-07 | 微软技术许可有限责任公司 | Automatic Discovery of System Behavior |
| WO2022168080A1 (en) * | 2021-02-08 | 2022-08-11 | Walkme Ltd. | Automated testing of walkthroughs |
| US11520690B2 (en) | 2021-02-08 | 2022-12-06 | Walkme Ltd. | Automated testing of walkthroughs |
| CN114265780A (en) * | 2021-11-26 | 2022-04-01 | 中国银行股份有限公司 | Method, system, equipment and storage medium for testing report system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11844579; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 11844579; Country of ref document: EP; Kind code of ref document: A1 |