
CN115168226A - Software test case automatic execution method based on data and keyword drive - Google Patents


Info

Publication number
CN115168226A
Authority
CN
China
Prior art keywords
test
data
keywords
test case
keyword
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210884562.1A
Other languages
Chinese (zh)
Inventor
吴翔虎
陶永超
刘颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Shenzhen
Original Assignee
Harbin Institute of Technology Shenzhen
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Shenzhen
Priority claimed from CN202210884562.1A
Publication of CN115168226A
Legal status: Pending


Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Prevention of errors by analysis, debugging or testing of software
    • G06F 11/3668 Testing of software
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Prevention of errors by analysis, debugging or testing of software
    • G06F 11/3668 Testing of software
    • G06F 11/3672 Test management
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Prevention of errors by analysis, debugging or testing of software
    • G06F 11/3698 Environments for analysis, debugging or testing of software

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A data- and keyword-driven method for the automatic execution of software test cases, and in particular for the automatic execution of embedded software test cases. To address the low test accuracy and poor test effectiveness of embedded software test cases caused by the complexity of embedded software, the method comprises: building a model project; initializing system resources and generating specific test data; building a software test model; applying a constraint design to the test model; binding keywords to the test model; configuring coverage strategies and generating test cases from the keyword-bound test model and the specific test data; screening and confirming the generated test cases and integrating the results into a test case execution set; binding the execution set to the test environments and, after binding succeeds, traversing the execution set with each test environment; and driving the automatic execution of each test case through the bound keywords. The method belongs to the field of graphical modeling.

Figure 202210884562

Description

Software test case automatic execution method based on data and keyword drive
Technical Field
The invention relates to a method for the automatic execution of software test cases, in particular to a data- and keyword-driven method for the automatic execution of embedded software test cases, and belongs to the field of graphical modeling.
Background
With the rapid development of computer technology, embedded software is becoming ever more complex and its quality requirements ever stricter; a tiny error can cause huge losses. How to improve the testing efficiency of embedded software has therefore become a pressing question.
Model-based testing is a testing method in the field of software testing. With it, software test cases can be generated fully or partially automatically from a model, greatly reducing the tester's workload. In embedded software testing the workload of preparing test data is enormous and the work is difficult, so to ensure the accuracy and completeness of embedded software test cases, generating test cases driven by test data is a good approach; such a method is disclosed in the application "Software test case generation method, system and storage medium based on data driving and multiple coverage strategies" (2020100906350). However, because of the complexity of embedded software, the automatic execution of the test cases also deserves attention, so that human intervention is reduced as far as possible and testing becomes more efficient and reliable. For a test case to be executed automatically, its internal execution logic must be obtained, which places extremely high demands on testers and increases test cost and risk. How to complete the automatic execution of test cases when the tester does not know the internal execution logic is the problem of the present stage.
Therefore, how to realize the automatic execution of embedded software test cases has become the biggest problem at the present stage.
Disclosure of Invention
The invention provides a data- and keyword-driven method for the automatic execution of software test cases, aiming to solve the problems of low test accuracy and poor test effectiveness of embedded software test cases caused by the complexity of embedded software.
The technical scheme adopted by the invention is as follows:
it comprises the following steps:
s1, establishing a model project;
s2, initializing system resources of the model engineering, and generating specific test data according to the initialized system resources;
s3, establishing a test model of the software according to the established model engineering and the generated specific test data;
s4, carrying out constraint design on the test model;
s5, binding keywords for driving the test case to automatically execute with the test model;
s6, configuring a coverage strategy of the test case, and generating the test case with a fixed format according to specific test data by using the test model obtained in the S5 after the keywords are bound;
s7, screening and confirming the test cases with fixed formats generated in the S6, and integrating the test cases obtained after screening and confirming to be used as a test case execution set;
s8, binding and connecting the test case execution set with the test environments, and traversing the test case execution set by using each test environment after the binding is successful;
and S9, driving each test case in the test case execution set to be automatically executed by using the keywords bound in the S5.
Preferably, the system resources of the model engineering are initialized in S2, and specific test data is generated according to the initialized system resources; the specific process is as follows:
s21, the system resources comprise data, time constraint and test environment of test cases;
setting parameters of data, wherein the parameters comprise data format, data type, data precision and data range;
the data format comprises single data, structural data and aggregate data;
s22, generating specific test data according to the parameters of the data;
the generation of the specific test data comprises the following four conditions:
setting a value taking domain of data, and respectively setting a valid class of the data and an invalid class of the data;
II, automatically sampling and generating a data effective value and a data invalid value according to an algorithm of an equivalence class and a boundary value through a set data value taking domain or a set identification logic operation expression;
III, generating continuous data;
and IV, generating time constraint by adopting a five-value method, wherein the five-value method comprises a left boundary-precision, a left boundary, a middle value, a right boundary and a precision value.
Preferably, the test model established in S3 is a nestable activity graph;
the graphic elements of the activity graph comprise action nodes, action streams, starting nodes, ending nodes, judging nodes, timer nodes, concurrency starting nodes and concurrency ending nodes.
Preferably, the test path of the test model established in S3 includes:
(1) Common path: contains no loop, no judgment node, and no concurrency;
(2) Loop path: a path containing a loop;
(3) Concurrent path: two or more activities executed within the same time interval;
(4) Constraint function expression: logical operations, arithmetic operations, time-constraint and data operations, and different types of time-constraint operations performed on the judgment nodes of the test model.
Preferably, the constraint design in S4 includes a data constraint design, preset conditions, time constraints, and a sequential logic relationship.
Preferably, the keywords in S5 include a keyword function name, parameters of the keyword function, and a type of the keyword function; the types of the keyword functions comprise service layer keywords, logic layer keywords and execution layer keywords.
Preferably, in S5, the keyword for driving the test case to automatically execute is bound to the test model, and the specific process is as follows:
and binding each primitive of the test model activity diagram with one or more keywords, wherein the keywords bound by the same primitive are separated by half-angle semicolons.
Preferably, in S9, each test case in the test case execution set is driven to be automatically executed by using the keywords bound in S5, and the specific process is as follows:
s91, driving each test case in the test case execution set to generate a corresponding test script by using the keywords bound in the S5, wherein the test script is used for calling a function of the keywords corresponding to the test case and displaying parameter values of the function;
s92, sending the test script to a corresponding test environment in the test system;
s93, automatically executing a test script by a test environment;
s94, the test system records and dynamically displays the execution condition of the test script in real time, and the execution condition of the corresponding test case can be obtained;
s95, after the execution is finished, the test environment feeds back the execution result to the test system;
and S96, analyzing and updating the execution result of the test script by the test system, and outputting the execution result.
Preferably, in S91, each test case in the test case execution set is driven by using the keywords bound in S5 to generate a corresponding test script, and the specific process is as follows:
s911, analyzing the keyword function;
s912, outputting the analyzed keywords according to a front-back sequence to test cases;
s913, assigning a plurality of values to the output parameters of the test data of the continuous frame type;
and S914, dereferencing the parameter values of the keywords as dereferences of corresponding test data, and outputting the keywords without dereferences in an original state to obtain the test script.
Preferably, the parsing of the keyword function in S911 includes the following three parts:
(1) Testing key operations to be executed, namely some key actions of actual operations;
(2) An operated object to be triggered by the key action, namely an operated control element;
(3) Data that must be provided for testing, i.e., test data.
Advantageous effects:
The invention first establishes a test model for the embedded software, applies a constraint design to it, and binds to it the keywords that drive automatic test case execution. The test model is a nestable activity graph; each primitive of the activity graph is bound to one or more keywords, and keywords bound to the same primitive are separated by half-width semicolons. A keyword comprises the keyword function name, the parameters of the keyword function, and the type of the keyword function; the types are business-layer, logic-layer, and execution-layer keywords. Test cases are then generated based on the data and several coverage strategies, screened and confirmed, and bound to the test environments; after successful binding, each test environment traverses the test case execution set. Finally, the keywords drive each test case to generate a test script, one script per test case; the test system executes the scripts and feeds the results back in real time, achieving automatic test execution, improving the test accuracy of the test cases, and producing a good test effect. Because the test is keyword-driven, the test cases are decoupled from the actual test data, so little coding ability is required of testers and operation is simple; testers need only select test cases and bind the corresponding test environments to execute them in a distributed manner, which improves testers' working efficiency and reduces test cost and risk.
In the invention, when the application under test changes, the tester only needs to modify the corresponding keywords and their specific logic; since test data and test logic are completely separated, the test cases and their test scripts are easy to maintain, and testing becomes more efficient.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
Detailed Description
The first embodiment is as follows: the embodiment is described with reference to fig. 1, and the method for automatically executing software test cases based on data and keyword driving in the embodiment includes the following steps:
s1, establishing a model project;
The model project comprises the model's files and the model's data.
Establishing the model project in fact means establishing a workspace for the test project. When the embedded software model project is created, the test project is first named and its actual storage location determined; the system then creates the project's model files and model data. The model files include physical files such as a test document folder, a picture folder, and the test model file, where the test model file is the entry file through which a tester opens the model editor and reads or writes the model's contents. The model data is the actual content of the model together with the uniformly stored information the project may use, and can be realized as a database, XML, or the like.
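As an illustrative sketch of this workspace creation (the folder and file names here are assumptions, not specified by the patent):

```python
from pathlib import Path

def create_model_project(root: str, name: str) -> Path:
    """Create the workspace for a test project: document and picture
    folders plus the test-model entry file the model editor opens."""
    project = Path(root) / name
    (project / "documents").mkdir(parents=True, exist_ok=True)
    (project / "pictures").mkdir(parents=True, exist_ok=True)
    # Model data could equally live in a database; XML is one option
    # the text mentions.
    (project / "test_model.xml").write_text("<model/>", encoding="utf-8")
    return project
```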
S2, initializing system resources of the model engineering, and generating specific test data according to the initialized system resources; the specific process is as follows:
the system resources include data, time constraints, and test environments for test cases.
Initializing all system resources of the model engineering, and generating specific test data about the embedded software according to the initialized system resources; the time constraint is a constraint with a given time range.
S21, setting parameters of data, wherein the parameters comprise a data format, a data type, data precision and a data range, and the data format comprises single data, structural data and aggregate data;
Single data: a datum with a single physical meaning. Structural data: composed of several data with different attributes, which are usually collected, computed, and processed together. Aggregate data: composed of several physical data that have little association with one another, are simply grouped together, and can be processed together or separately. The data definition must be able to express single, structural, and aggregate data.
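The three data formats can be sketched as plain Python types (an illustration of the distinction, not the patent's actual data model):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SingleDatum:
    """One datum with a single physical meaning, e.g. a temperature."""
    name: str
    value: float

@dataclass
class StructuralDatum:
    """Several fields with different attributes, collected, computed
    and processed together."""
    fields: Dict[str, float]

@dataclass
class AggregateDatum:
    """Loosely associated physical data, simply grouped; members may be
    processed together or separately."""
    members: List[SingleDatum]
```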
S22, generating specific test data according to the parameters of the data;
the generation of the specific test data comprises the following four conditions:
setting a value taking domain of data, and respectively setting a valid class of the data and an invalid class of the data;
II. automatically sampling and generating valid and invalid data values with equivalence-class and boundary-value algorithms, from the configured data value domain or from a simple, recognizable logical operation expression;
the specific algorithm description is shown as algorithm a.
The algorithm name is: algorithm a
Inputting: data of
And (3) outputting: specific test data
The process is as follows: obtain the parameters of the data, i.e., its data structure, and judge whether it is single data or multiple data; for single data, cover each equivalence class and the boundary values of the datum (7-value method, 9-value method); for multiple data, first determine the values of each single datum and then perform combinatorial testing on the multiple data (EC, 2-wise, 3-wise, n-wise, etc.); finally, obtain and store the specific test data.
III. generating continuous data: certain data is received continuously; it may remain unchanged, but in most cases it changes in real time;
and IV, generating time constraint by adopting a five-value method, wherein the five-value method comprises a left boundary-precision, a left boundary, a middle value, a right boundary and a precision value.
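The generation cases above can be sketched in Python. The exact sample points of the 7-value and five-value methods are common readings of boundary-value sampling and therefore assumptions; in particular, the fifth value is taken here as "right boundary + precision":

```python
import itertools
from typing import Dict, List

def seven_values(lo: float, hi: float, precision: float = 1) -> List[float]:
    """7-value boundary sampling of a range [lo, hi]: just below, at and
    just above each boundary, plus the midpoint (assumed sample points)."""
    return [lo - precision, lo, lo + precision,
            (lo + hi) / 2,
            hi - precision, hi, hi + precision]

def each_choice(factors: Dict[str, list]) -> List[dict]:
    """EC coverage: every value of every factor appears at least once,
    giving max(A_i) combined cases."""
    width = max(len(v) for v in factors.values())
    return [{name: vals[i % len(vals)] for name, vals in factors.items()}
            for i in range(width)]

def full_combination(factors: Dict[str, list]) -> List[dict]:
    """Full combination coverage: the Cartesian product of factor values."""
    names = list(factors)
    return [dict(zip(names, combo))
            for combo in itertools.product(*factors.values())]

def five_values(left: float, right: float, precision: float) -> List[float]:
    """Five-value sampling of a time constraint [left, right]:
    left - precision, left, midpoint, right, right + precision."""
    return [left - precision, left, (left + right) / 2,
            right, right + precision]
```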
S3, establishing a test model of the software according to the established model engineering and the generated specific test data; the software is embedded software.
The established test model is a nestable activity diagram and supports software service coverage of basic service, cyclic processing and concurrent service.
The primitives of the activity graph comprise action nodes, action streams, a start node, an end node, a judgment node, a timer node, a concurrency start node and a concurrency end node, and the action nodes support subgraph nested modeling.
Test-requirement modeling of the embedded software system is performed with the formal expression of an activity diagram; the model is then given its constraint design and bound to the keywords that drive automatic test execution.
The test path of the test model comprises:
common path: there is no loop, no judgment node, no concurrent path.
Circulation path: a path with a loop (loop).
Concurrent path: two or more activities are performed within the same time interval. In complex large systems, objects at runtime often have not just one control flow but two or more concurrently running control flows.
The constraint function expression: logic operation, arithmetic operation, time constraint and data operation and different types of time constraint operation can be carried out on the judgment nodes of the model.
And S4, carrying out constraint design on the test model.
The constraint design comprises a data constraint design, preset conditions, time constraints and a time sequence logic relation.
Sub-graphs of the same type can be nested in an activity graph of the test model, i.e., an activity graph can nest a sub-activity graph. Besides describing the software's functional logic during modeling, the following constraint designs can be applied to the test model:
Data constraint design: input data from the data configuration is referenced in the activity nodes of the activity graph, and data constraint conditions are added to the defined data; that is, a constraint is expressed as a logical expression that determines under which condition a branch is taken. Defined data is referenced with the $ symbol, and all defined data can be referenced quickly and automatically; the logical operators include >, <, =, &&, ||, and !. Each time a data variable is referenced, one value is taken from its value set.
Presetting conditions: and adding preset conditions on the graphic elements of the active graph. The preset conditions are divided into two types: one is the preset condition of the whole test model, which is equivalent to the setting of a precondition before the execution of the function or scene expressed by the test model, and is mostly used for the execution of case sets; another is a preset condition for a certain activity, corresponding to a precondition that must be fulfilled before the activity is performed. The preset condition can be either a literal expression or a logical expression referring to the data.
Time constraint: a time constraint can be expressed on an activity node of an activity graph. Time constraints are divided into absolute time and relative time, and specifically into time-point constraints, duration constraints, and cycle-time constraints. A time-point constraint limits the time at which an activity starts or ends; a duration constraint limits the task's execution duration or the waiting time before it starts; a cycle-time constraint limits the interval at which the task is repeatedly executed.
Sequential logic relationships: four kinds of sequential logic relationships can be expressed on the activity diagram: timing relationships between time points, between a time point and a duration, between durations, and between a time point and a cycle time.
S5, binding keywords for driving the automatic execution of the test case with the test model, wherein the specific process is as follows:
and binding each primitive of the test model activity diagram with one or more keywords, wherein the keywords bound by the same primitive are separated by half-angle semicolons.
In order to realize the automatic execution of the keywords in the drive test, firstly, the definition of the keywords is needed, and the keywords mainly comprise the following three parts:
(1) A keyword function name;
(2) Parameters of the keyword function: derived from defined test data;
(3) Type of keyword function: business layer keywords, logic layer keywords and execution layer keywords;
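A minimal sketch of the keyword structure and the semicolon-separated binding described above (the names and layer labels are illustrative, not taken from the patent):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Keyword:
    name: str          # keyword function name
    params: List[str]  # parameters, drawn from the defined test data
    layer: str         # "business", "logic" or "execution" layer

def parse_bound_keywords(cell: str) -> List[str]:
    """Split the keyword names bound to one primitive; per the text,
    keywords on the same primitive are separated by half-width semicolons."""
    return [part.strip() for part in cell.split(";") if part.strip()]
```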
s6, configuring a coverage strategy of the test case, and generating the test case with a fixed format according to specific test data by using the test model obtained in the S5 after the keywords are bound;
the coverage strategy comprises a data coverage strategy, a condition judgment coverage strategy, a path coverage strategy and a sub-graph coverage strategy.
A coverage strategy is configured for each layer of the test model's activity graphs according to the software's test risk level, determining the test target. Specific test data is combined according to the data coverage strategy, and, driven by the specific test data, the path depth, sub-graph inclusion strategy, and condition combination are selected on the basis of the test model, until test cases are generated that finally satisfy the data coverage, condition judgment coverage, path coverage, and sub-graph coverage strategies.
Data coverage strategy: the invention provides a data coverage strategy selection for each layer's graph, including EC, 2-wise, 3-wise, N-wise, and no coverage. EC is the single-choice combination, in which every factor value appears at least once; N is the interaction strength among the different data factors. In the two special cases: under single-factor coverage, the number of test cases generated by combination is C = max{A_i}; under full combination coverage of the data factors, the number of test cases generated by combination is C = A_1 x A_2 x ... x A_m, where A_i is the number of selectable values in the data domain of the i-th factor after test data generation.
Condition judgment coverage strategy: the invention provides condition-combination coverage selection; when it is not checked, branch coverage is the default, similar to logical coverage in white-box testing.
Path coverage strategy: the invention provides a drop-down selection of depths 1, 2, 3, ..., n and adopts a depth-based path coverage strategy. A judgment node is taken as the junction of two adjacent paths, and the coverage depth of adjacent paths is computed; the greater the configured depth, the more test paths are generated and the higher the coverage. The finally generated test cases are grouped by test path, and each path may be executed with several groups of data.
Sub-graph coverage strategy: comprising three relationships, i.e., the expansion relationship, the through relationship, and the combination relationship. Expansion relationship: for intuitive display or convenience, the sub-graph forms a separate graph as part of the parent graph, and its coverage strategy is consistent with the parent graph's. Through relationship: any through path in the sub-graph is merged into the parent graph, layering the test thinking. Combination relationship: all paths in the sub-graph are fully combined with the connecting paths of the parent graph, covering the case in which the parent graph has several incoming or outgoing paths.
S7, screening and confirming the test cases with the fixed format generated in the S6, and integrating the test cases obtained after screening and confirming to be used as a test case execution set;
and screening and confirming the generated test cases with fixed formats according to user-defined conditions or system default conditions, and integrating the test cases obtained after screening and confirmation into a test case execution set.
S8, binding and connecting the test case execution set with the test environments, and traversing the test case execution set by using each test environment after the binding is successful;
according to the test environment set by the system resources in the S2, whether the test cases can be successfully connected with the test environment is checked, and after the connection is successful, the system sends the test cases to each test environment one by one to carry out automatic execution on the test cases; otherwise, the operation ends. The detailed steps of the test case execution are as follows:
(1) Automatically generate test cases based on the data and the coverage strategies.
(2) Screen and confirm the final test cases; the resulting test cases form the test case execution set.
(3) Bind and connect the test case execution set with the test environment (test platform).
(4) After binding succeeds, each test environment traverses the test case execution set and the test cases are executed automatically one by one.
S9, driving each test case in the test case execution set to be automatically executed by using the keywords bound in the S5, wherein the specific process is as follows:
the automatic execution steps of a single test case are as follows:
s91, driving each test case in the test case execution set to generate a corresponding test script by using the keywords bound in S5, wherein the test script is used for calling a function of the keywords corresponding to the test case and displaying parameter values of the function; the specific process is as follows:
s911, analyzing the keyword function, which comprises the following three parts:
(1) Testing key operations to be executed, namely some key actions of actual operations;
(2) An operated object to be triggered by the key action, namely an operated control element;
(3) Data that must be provided for testing, i.e., test data.
S912, outputting the parsed keywords, in their front-to-back order, into the test case;
S913, assigning multiple values to the output parameters for test data of the continuous-frame type;
Data defined through system resources may be of the continuous-frame type; the concrete data structure is similar to an array. For example, if datum A has 3 frames, A[3] can be defined, and the parameter takes 3 values (assignment array: attribute A = [1, 2, 3]).
S914, taking the parameter values of the keywords as the values of the corresponding test data, and outputting keywords that take no value unchanged, to obtain the test script;
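Steps S911 to S914 can be sketched as a small script generator; the call-style output syntax is an assumption, since the patent does not fix a script language:

```python
from typing import Dict, List, Tuple

def render_script(steps: List[Tuple[str, Dict[str, object]]]) -> str:
    """Emit one script line per keyword, in front-to-back order (S912):
    parameters take the values of the corresponding test data (S913/S914),
    and keywords that take no value are emitted unchanged."""
    lines = []
    for name, params in steps:
        if params:
            args = ", ".join(f"{p}={v!r}" for p, v in params.items())
            lines.append(f"{name}({args})")
        else:
            lines.append(f"{name}()")
    return "\n".join(lines)
```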
s92, sending the test script to a corresponding test environment in the test system;
s93, automatically executing the test script in the test environment, and feeding back the execution condition in real time;
s94, the test system records and dynamically displays the execution condition of the test script in real time, and the execution condition of the corresponding test case can be obtained;
s95, after the execution is finished, the test environment feeds back the execution result to the test system;
and S96, analyzing and updating the execution result of the test case by the test system, and outputting the execution result to a tester.
Test cases and test scripts correspond one to one: each test case has exactly one test script. The test cases are traversed; each case's script is handed to the test platform for execution, the platform feeds back the result, and then the next test case is executed. In this way the automatic execution of embedded software test cases, and finally the testing of the embedded software, is realized.
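The one-case-one-script execution loop can be sketched as follows, with a callable standing in for the real test platform (a simplification: the real system also records and displays execution state in real time):

```python
from typing import Callable, Dict, List, Tuple

def run_execution_set(cases: List[Tuple[str, str]],
                      environment: Callable[[str], str]) -> Dict[str, str]:
    """Traverse the execution set: each case's script is handed to the
    test environment, the result is fed back, and only then is the next
    case executed, mirroring the sequential loop in the description."""
    results = {}
    for case_id, script in cases:
        results[case_id] = environment(script)
    return results
```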
In the invention, when the application under test changes, the testing personnel only need to modify the corresponding keywords and their specific logic; since test data and test logic are completely separated, the test cases and their test scripts are easy to maintain, and testing becomes more efficient.

Claims (10)

1. The software test case automatic execution method based on data and keyword driving is characterized in that it comprises the following steps:
S1, establishing a model project;
S2, initializing the system resources of the model project, and generating specific test data from the initialized system resources;
S3, establishing a test model of the software from the established model project and the generated specific test data;
S4, carrying out constraint design on the test model;
S5, binding the keywords that drive the automatic execution of the test cases with the test model;
S6, configuring a coverage strategy for the test cases, and generating test cases in a fixed format from the specific test data, using the keyword-bound test model obtained in S5;
S7, screening and confirming the fixed-format test cases generated in S6, and integrating the test cases that pass screening and confirmation into a test case execution set;
S8, binding and connecting the test case execution set with the test environments and, after the binding succeeds, traversing the test case execution set with each test environment;
S9, driving each test case in the test case execution set to execute automatically, using the keywords bound in S5.
2. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 1, wherein in S2 the system resources of the model project are initialized and specific test data are generated from the initialized system resources, the specific process being:
S21, the system resources comprise the data, the time constraints and the test environments of the test cases;
setting the parameters of the data, the parameters comprising data format, data type, data precision and data range;
the data format comprises single data, structural data and aggregate data;
S22, generating the specific test data according to the parameters of the data;
the generation of the specific test data covers the following four cases:
I, setting a value domain for the data, and setting the valid class and the invalid class of the data respectively;
II, automatically sampling and generating valid and invalid data values, by equivalence-class and boundary-value algorithms, from the set data value domain or a set identification logic operation expression;
III, generating continuous data;
IV, generating time constraints by the five-value method, the five values being the left boundary minus the precision, the left boundary, a middle value, the right boundary, and a precision value.
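A hedged Python sketch of cases II and IV above. The sampling offsets are illustrative, and since the claim only names "a precision value" as the fifth value, this sketch assumes it means the right boundary plus the precision (by symmetry with the first value):

```python
def boundary_samples(lo, hi):
    """Equivalence-class / boundary-value sampling over a value domain
    [lo, hi]: valid values are drawn from the boundaries and the interior,
    invalid values from just outside the domain."""
    mid = (lo + hi) // 2
    valid = [lo, lo + 1, mid, hi - 1, hi]      # boundary + interior samples
    invalid = [lo - 1, hi + 1]                 # just outside the valid class
    return valid, invalid

def five_value_method(left, right, precision):
    """Five-value method for a time constraint: left boundary - precision,
    left boundary, middle value, right boundary, and (assumed here)
    right boundary + precision."""
    return [left - precision, left, (left + right) / 2, right, right + precision]

print(boundary_samples(0, 10))
print(five_value_method(10, 20, 1))
```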
3. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 2, wherein the test model established in S3 is a nestable activity diagram;
the graphic elements of the activity diagram comprise action nodes, action flows, start nodes, end nodes, judgment nodes, timer nodes, concurrency start nodes and concurrency end nodes.
4. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 3, wherein the test paths of the test model established in S3 comprise:
(1) Ordinary paths: paths with no loop, no judgment node and no concurrency;
(2) Loop paths: paths containing a loop;
(3) Concurrent paths: paths in which two or more activities execute within the same time interval;
(4) Constraint function expressions: logic operations, arithmetic operations, time-constraint and data operations, and different types of time-constraint operations performed on the judgment nodes of the test model.
5. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 4, wherein the constraint design in S4 comprises data constraint design, preconditions, time constraints and timing logic relationships.
6. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 5, wherein the keywords in S5 comprise the keyword function name, the parameters of the keyword function and the type of the keyword function; the types of keyword functions comprise service-layer keywords, logic-layer keywords and execution-layer keywords.
7. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 6, wherein in S5 the keywords that drive the automatic execution of the test cases are bound with the test model, the specific process being:
binding each primitive of the test model's activity diagram with one or more keywords, where the keywords bound to the same primitive are separated by half-width semicolons.
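A one-function Python sketch of reading that binding back: splitting the keyword string attached to a single primitive on half-width semicolons (the function name is illustrative):

```python
def keywords_of(primitive_binding):
    """Split the keyword string bound to one activity-diagram primitive.
    Per the claim, multiple keywords on the same primitive are separated
    by half-width semicolons (';'); whitespace around names is ignored."""
    return [kw.strip() for kw in primitive_binding.split(";") if kw.strip()]

print(keywords_of("SendFrame; CheckAck"))
print(keywords_of("Init"))
```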
8. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 7, wherein in S9 each test case in the test case execution set is driven to execute automatically by the keywords bound in S5, the specific process being:
S91, driving each test case in the test case execution set with the keywords bound in S5 to generate a corresponding test script, the test script being used to call the keyword functions corresponding to the test case and to supply the parameter values of those functions;
S92, sending the test script to the corresponding test environment in the test system;
S93, the test environment automatically executes the test script;
S94, the test system records and dynamically displays the execution status of the test script in real time, from which the execution status of the corresponding test case is obtained;
S95, after execution finishes, the test environment feeds the execution result back to the test system;
S96, the test system analyses and updates the execution result of the test script, and outputs the execution result.
9. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 8, wherein in S91 each test case in the test case execution set is driven by the keywords bound in S5 to generate a corresponding test script, the specific process being:
S911, analysing the keyword functions;
S912, outputting the test case according to the order of the analysed keywords;
S913, assigning multiple values to the output parameters of continuous-frame test data;
S914, assigning the parameter values of the keywords from the values of the corresponding test data, and outputting keywords that take no values unchanged, to obtain the test script.
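A minimal Python sketch of steps S912 and S914: keywords are emitted in their analysed order, parameters are filled with the corresponding test data values, and a keyword that takes no values is output unchanged. The keyword/data representation here is illustrative, not the patent's internal format:

```python
def generate_script(keywords, test_data):
    """Emit one script line per keyword, preserving keyword order (S912);
    each parameter takes the value of the matching test datum (S914),
    and parameterless keywords are emitted as-is."""
    lines = []
    for kw in keywords:                               # keep analysed order
        name, params = kw["name"], kw.get("params", [])
        values = [test_data.get(p, p) for p in params]  # fill in data values
        lines.append(f"{name}({', '.join(map(str, values))})")
    return "\n".join(lines)

keywords = [{"name": "Init"},
            {"name": "Send", "params": ["speed"]}]
print(generate_script(keywords, {"speed": 42}))
```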
10. The method for automatically executing software test cases based on data and keyword driving as claimed in claim 9, wherein the analysis of the keyword function in S911 comprises the following three parts:
(1) The key operation the test is to execute, i.e. a key action of the actual operation;
(2) The operated object the key action is to trigger, i.e. the operated control element;
(3) The data that must be supplied for the test, i.e. the test data.
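The three parts above can be sketched as a simple record type; the class and field names are illustrative, not the patent's own structures:

```python
from dataclasses import dataclass, field

@dataclass
class Keyword:
    """Result of analysing one keyword function (S911):
    the key action, the operated control element it targets,
    and the test data the action requires."""
    action: str                                  # (1) key operation to execute
    target: str                                  # (2) operated control element
    data: dict = field(default_factory=dict)     # (3) required test data

kw = Keyword(action="click", target="OK_button", data={"timeout": 5})
print(kw)
```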
CN202210884562.1A 2022-07-25 2022-07-25 Software test case automatic execution method based on data and keyword drive Pending CN115168226A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210884562.1A CN115168226A (en) 2022-07-25 2022-07-25 Software test case automatic execution method based on data and keyword drive


Publications (1)

Publication Number Publication Date
CN115168226A true CN115168226A (en) 2022-10-11

Family

ID=83497812

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210884562.1A Pending CN115168226A (en) 2022-07-25 2022-07-25 Software test case automatic execution method based on data and keyword drive

Country Status (1)

Country Link
CN (1) CN115168226A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101814053A (en) * 2010-03-29 2010-08-25 中国人民解放军信息工程大学 Method for discovering binary code vulnerability based on function model
US20150324274A1 (en) * 2014-05-09 2015-11-12 Wipro Limited System and method for creating universal test script for testing variants of software application
CN105068927A (en) * 2015-08-04 2015-11-18 株洲南车时代电气股份有限公司 Keyword drive-based automatic test method of urban rail drive control units
CN112783794A (en) * 2021-02-10 2021-05-11 西南电子技术研究所(中国电子科技集团公司第十研究所) Aviation communication radio station software test system
CN113986441A (en) * 2021-11-05 2022-01-28 中国航空无线电电子研究所 An automated testing method for aircraft ground station software man-machine interface



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination