Optimal Test Case Prioritization In Regression

02 Nov 2017


ABSTRACT:

If test execution must be suspended after some period of time, a prioritized test suite is more likely to be effective during that period than a randomly ordered one. A better test case ordering is also possible when the time available to run the test cases is known in advance. The main goal of this research is to prioritize regression test cases. Several factors are considered for the prioritization and are employed in the prioritization algorithm. Trace events are one of the important factors, used to find the most significant test cases in a project. Based on these factors, the test cases are prioritized using a modified Artificial Bee Colony (ABC) algorithm, in which the K-Means algorithm is used to find the optimal onlooker bee. Executing the test cases in the prioritized order greatly decreases computation cost and time. The proposed technique is efficient in prioritizing regression test cases. After prioritization, the prioritized subsequences of the given unit test suites are executed on Java programs. The Average Percentage of Faults Detected (APFD) metric is used to evaluate the quality of these orderings.

Keywords: Artificial Bee Colony (ABC), Regression Testing, Test Case, Average Percentage of Faults Detected (APFD), Genetic Algorithm, K-Means algorithm.

1. INTRODUCTION:

Software testing and retesting occur continuously during the software development life cycle, with the motive of detecting faults as early as possible [3], [14]. Software maintenance is an important and costly activity of the software development life cycle [10]. Regression testing is the process of validating modified software to assure that the changed parts of the software behave as intended and that the unchanged parts have not been adversely affected by the modification [15]. Regression testing is an expensive part of the software maintenance process. Effective regression testing techniques select and order (or prioritize) test cases between successive releases of a program [1]. Regression testing analyzes whether the maintenance of the software has adversely affected its normal functioning, and it is generally performed under strict time constraints [12]. Regression test selection techniques reduce the cost of regression testing by selecting an appropriate subset of the existing test suite, based on information about the program, the modified version, and the test suite [11].

The cost and time required for regression testing can be minimized by using prioritization techniques [8]. Test case prioritization techniques aim to improve the effectiveness of regression testing by ordering the test cases so that the most beneficial ones are executed first [2]. Test case prioritization is a typical scenario of regression testing, which plays an important role in software maintenance [4]. Test case prioritization has been an effective means to order test cases in regression test suites so that faults can be detected earlier [6], and such techniques have proven to be beneficial in improving regression testing activities [16]. It has been shown that testing, analysis, and debugging usually consume over 50% of the costs associated with the development of large software systems [7]. An effective and reliable test case prioritization technique for regression testing is necessary to ensure optimum utility and no side effects in the software after modification [13]. Statistical fault-localization techniques use the execution information collected during testing to locate faults, and executing a small fraction of a prioritized test suite reduces the cost of testing [5]. Soft computing techniques are artificial-intelligence-based techniques, and they are very powerful tools for solving extremely complex, nonlinear processes [9].

2. LITERATURE SURVEY:

Regression testing is the process of validating modified software to assure that the changed parts behave as intended and that the unchanged parts have not been adversely affected by the modification. It is an expensive part of the software maintenance process, generally performed under strict time constraints, and effective techniques select and order (or prioritize) test cases between successive releases of a program. A brief review of some recent research in this area is presented here.

James A. Jones and Mary Jean Harrold [18] have presented two new algorithms for test-suite reduction and one new algorithm for test-suite prioritization that can account for MC/DC when reducing and prioritizing test suites. They have also presented the results of empirical studies that evaluate these algorithms. The results show the potential for substantial test-suite size reduction with respect to MC/DC. Such techniques have significantly reduced the cost of regression testing.

Pavan Kumar Chittimalli and Mary Jean Harrold [19] have presented a technique that provides updated coverage data for a modified program without running all test cases in the test suite that was developed for the original program and used for regression testing. Their technique was safe and precise: it computed the same information as if all test cases in the test suite were rerun, assuming that the regression test selection technique it leverages was safe.

Hyunsook Do et al. [20] have presented a series of controlled experiments assessing the effects of time constraints and faultiness levels on the costs and benefits of test case prioritization techniques. Their results showed that time constraints can indeed play a significant role in determining both the cost-effectiveness of prioritization techniques and the relative cost-benefit trade-offs among techniques. They also showed that when a software product contains a large number of faults, employing heuristics can be beneficial even when no time constraints apply. The results indicated that the benefits gained from early fault detection were high enough to compensate for the costs incurred by applying the heuristics.

Shih-Chia Huang [21] has proposed a novel and accurate approach to motion detection for automatic video surveillance systems. The method achieves complete detection of moving objects through three proposed modules: a background modeling (BM) module, an alarm trigger (AT) module, and an object extraction (OE) module. The BM module uses a unique two-phase background matching procedure, performing rapid matching followed by accurate matching, in order to produce optimum background pixels for the background model. The AT module then eliminates unnecessary examination of the entire background region, allowing the subsequent OE module to process only blocks containing moving objects. Finally, the OE module forms the binary object detection mask in order to achieve highly complete detection of moving objects. The detection results produced by the method were analyzed both qualitatively, through visual inspection, and quantitatively, for accuracy, along with comparisons to the results produced by other state-of-the-art methods.

Aftab Ali Haider et al. [22] have proposed an expert system that finds a trade-off among the quality aspects, the technique used, and the level of testing, based on an objective function defined by the tester, quite similar to human judgment, using fuzzy-logic-based classification. They mainly focused on finding the test suite that is optimal for multi-objective regression testing. Their approach produced better results in comparison to other computational intelligence (CI) techniques.

Carlos R. del-Blanco et al. [23] have proposed an efficient visual detection and tracking framework for the tasks of object counting and surveillance, which meets the requirements of consumer electronics: off-the-shelf equipment, easy installation and configuration, and unsupervised working conditions. This was accomplished by a novel Bayesian tracking model that can manage multimodal distributions without explicitly computing the association between tracked objects and detections, and that is robust to erroneous, distorted, and missing detections.

Siavash Mirarab et al. [24] have proposed a novel approach for selecting and ordering a predetermined number of test cases from an existing test suite. The approach formulates an Integer Linear Programming problem using two different coverage-based criteria and uses constraint relaxation to find many close-to-optimal solution points. These points are then combined to obtain a final solution using a voting mechanism. The selected subset of test cases is then prioritized using a greedy algorithm that maximizes minimum coverage in an iterative manner. Their approach has been empirically evaluated, and the results show significant improvements over existing approaches in some cases and comparable results for the rest.

Hong Mei et al. [25] have proposed JUPTA (JUnit test case Prioritization Techniques operating in the Absence of coverage information), an approach to prioritizing test cases in the absence of coverage information that operates on Java programs tested under the JUnit framework, an increasingly popular class of systems. JUPTA analyzes the static call graphs of the JUnit test cases and the program under test to estimate the ability of each test case to achieve code coverage, and then schedules the order of these test cases based on those estimates. Although the test suites constructed by dynamic coverage-based techniques retain fault-detection effectiveness advantages, the fault-detection effectiveness of the test suites constructed by JUPTA was close to that of the suites constructed by those techniques, and the effectiveness of the suites constructed by some of JUPTA's variants was better than that of the suites constructed by several of those techniques.

3. PROBLEM DEFINITION:

Regression testing is the process of validating modified software to detect whether new errors have been introduced into previously tested code and to provide confidence that the modifications are correct. Whenever the software is modified, a set of test cases is run and the new outputs are compared with the old ones to detect unwanted changes. If the new and old outputs match, the modifications made in one part of the software have not affected the rest of the software. With existing techniques, it is impractical to re-execute every test case of a program whenever changes occur. Our proposed methodology is designed to improve user-perceived software quality in a cost-effective way by considering potential defect severity, and to improve the rate of detection of severe faults during system-level testing of new code and regression testing of existing code.

4. PROPOSED METHODOLOGY:

Regression test suite optimization is one of the most important problems in software engineering research. There are two approaches to optimizing the test suite: regression test suite selection and regression test suite prioritization. In order to achieve a cost-effective test suite, we develop a prioritization technique based on a modified Artificial Bee Colony algorithm. The objective of the proposed method is to generate efficient test cases in a test suite that can cover the given software under test within less time. In the proposed system, the individuals are the test cases in the test suite and the nodes are the blocks of executable statements in the software. The artificial bees modify the test cases over time, and each bee's aim is to discover the positions of nodes with higher factor values. The fitness value associated with each test case is calculated using the factors measured for that test case. With this methodology, an optimal test suite with less cost and time is obtained, as represented in Fig. 1.

Fig. 1: Block diagram of the proposed prioritization process. The case study feeds test case generation; the generated test cases undergo factor measurement (coverage, responsibility, trace events, time); and the measured factors are passed to the modified ABC algorithm, which produces the optimized test cases.

4.1 Factor measurement

Test case prioritization techniques are useful for enhancing regression-testing activities. Through prioritization, the fault detection rate is increased, and testers are thus able to find faults earlier in the system-testing phase. However, most of the prioritization techniques proposed so far are code-coverage based, and such techniques may treat all faults evenly. In our proposed technique, the regression test cases are prioritized efficiently. Here we propose one novel factor, trace events, which is normally used in coupling measurement. The trace events process effectively identifies the test cases that are exercised at runtime. For optimizing the test cases we measure factors such as trace events, coverage, time, and responsibility values.

Trace Events:

Trace events are an efficient technique for tracing test cases at runtime; the factor value is assigned based on these events. Numerous techniques are available for gathering run-time information from projects. The operating system is shipped instrumented with key events, and the user need only activate tracing to capture the flow of events from the operating system. Application developers may wish to instrument their application code during development for tuning purposes, which gives them insight into how their applications interact with the system. To add a trace event, design the trace records created by the program according to the trace interface conventions, and then add trace-hook macros to the program at the suitable locations. Traces can then be taken through any of the standard ways of invoking and controlling trace (commands, subcommands, or subroutine calls). The trace event algorithm for our proposed work is given below.

For each class c in the project            // Classes
    For each test case t in class c        // Test cases
        Let s be the first line of the method under test
        When s is invoked at runtime:
            Ct = Ct + 1                    // Trace records
    End
End

Fig.2: Pseudo code for trace events

The above process is repeated until all the classes and methods in the project have been processed. While the project is running, each class is invoked, and when the first line of a test case is invoked, the count Ct for that particular test case is incremented by 1. In this way the project repeatedly counts the number of times each test case is called dynamically. These counts Ct are taken as one of the factors for the test case prioritization process. Note that not every test case is necessarily invoked while running the project.
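As an illustration of this counting, the following minimal Java sketch shows one way such a trace-event counter could be maintained. The class and method names (TraceCounter, hit, count) are hypothetical helpers introduced only for illustration, not part of the paper's instrumentation.

import java.util.HashMap;
import java.util.Map;

/** Hypothetical helper illustrating the trace-event count Ct described above. */
public class TraceCounter {
    private static final Map<String, Integer> ct = new HashMap<>();

    /** Called as the first statement of an instrumented test-case method. */
    public static void hit(String testCaseId) {
        ct.merge(testCaseId, 1, Integer::sum);   // Ct = Ct + 1
    }

    /** Trace-event factor value used later during prioritization. */
    public static int count(String testCaseId) {
        return ct.getOrDefault(testCaseId, 0);
    }
}

A test case that is never reached at runtime keeps a count of zero, which is exactly the situation described in the last sentence above.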

4.2 Responsibility Dependency Graph:

The requirements for identifying the optimal test cases are obtained from the responsibility dependency graph. The responsibility represents the task or module of a system. A sample graph is shown in Fig.3 which represents the responsibility dependency graph for a sample test case.

Fig.3: Responsibility Dependency Graph

In this diagram, each node represents the responsibility or task and the directed lines represent the dependency between the tasks.

4.3 Coverage

1. Branch Coverage:

Branch coverage evaluates whether each decision node has been exercised under both its true and false conditions. A branch is the outcome of a decision, and branch coverage measures which decision outcomes have been tested; it is a stronger measure than simple statement coverage. The number of branches in a method can be found easily: Boolean decisions have two outcomes, true and false, whereas switch statements have one outcome for each case and for the default case. The total number of decision outcomes in a method is equal to the number of branches that need to be covered plus the entry branch into the method.

Branch Coverage should be high for efficient software.
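To make the counting concrete, the short Java sketch below computes a branch coverage percentage from covered decision outcomes, following the counting rule stated above (two outcomes per Boolean decision plus the entry branch). The class name and the example inputs are illustrative assumptions, not tooling from the paper.

public final class BranchCoverageExample {
    /** Branch coverage = covered decision outcomes / total decision outcomes. */
    public static double percent(int coveredOutcomes, int booleanDecisions) {
        int totalOutcomes = 2 * booleanDecisions + 1;  // two outcomes per decision + entry branch
        return 100.0 * coveredOutcomes / totalOutcomes;
    }

    public static void main(String[] args) {
        // e.g. a method with 3 Boolean decisions (7 outcomes including entry), 5 of them exercised
        System.out.printf("Branch coverage: %.1f%%%n", percent(5, 3));
    }
}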

2. Code Coverage:

Code coverage is used in software testing. It refers to the coverage of the building blocks of the program and can be used to determine the limitations of the tests; it gives the degree to which the source code of the program is exercised, and it verifies the code directly. Code coverage is used to measure the level of testing that has been performed on the software. Gathering coverage metrics is a straightforward process: instrument the code and run the tests against the instrumented version. This produces data showing which code did execute and, more importantly, which code did not. Coverage is the perfect complement to unit testing: unit tests show whether the code behaves as expected, and code coverage shows what remains to be tested. Code coverage should be high for efficient software.

3. State Coverage:

State coverage is used in software testing; it measures the extent to which the program's behavior is checked. Statement coverage, also known as line coverage or segment coverage, covers only the true conditions. Through statement coverage, the statements that were executed can be identified, as well as the code that was not executed because of blockages. In this process each line of code needs to be checked and executed, and statement coverage should be high for efficient software. The optimistic state coverage of the test suite T is given as

SC(T) = (Σi |Vi|) / (Σi |Ni|)

where Ni is the set of output-defining nodes of the software under test exercised by test ti, and Vi is the set of covered output-defining nodes of the software under test exercised by test ti.

4. Time:

Time represents the execution time for each test case in the case study, measured in milliseconds (ms).
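The paper does not state the exact formula by which these four factors are combined into the fitness used by the bee colony. The following Java sketch is therefore only a hypothetical illustration of one simple way to fold the measured factor values of a test case into a single objective score (a weighted sum that rewards coverage, responsibility, and trace events and penalizes execution time); the field names and all weights are assumptions.

/** Hypothetical factor record and objective score for one test case (weights are assumptions). */
public class TestCaseFactors {
    double codeCoverage, branchCoverage, statementCoverage;  // coverage factors
    double responsibility;                                   // dependency value from the graph
    double traceEvents;                                      // Ct from the trace-event count
    double time;                                             // execution time in ms

    /** Larger is better: high coverage, responsibility and trace counts, low execution time. */
    double objective() {
        return 1.0 * (codeCoverage + branchCoverage + statementCoverage)
             + 1.0 * responsibility
             + 1.0 * traceEvents
             - 1.0 * time;
    }
}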

4.4 Optimization using modified ABC algorithm:

ABC is an algorithm introduced by Dervis Karaboga in 2005, inspired by the intelligent foraging behavior of honey bees; in [26] it has also been used for data clustering. The colony of artificial bees in the ABC algorithm consists of three groups of bees: employed bees, onlookers, and scouts. A bee that waits on the dance area to choose a food source is called an onlooker, and a bee that goes to the food source it has previously visited is called an employed bee. The third type is the scout bee, which carries out a random search to discover new sources. The position of a food source represents a possible solution to the optimization problem, and the nectar amount of a food source corresponds to the quality (fitness) of the associated solution, estimated by the fitness function given in the steps below. Solutions with sufficiently good fitness values survive this evaluation. The detailed steps of the ABC algorithm are as follows; a compact Java sketch of the main loop is given after the steps.

1. Initialize the population of solutions xi, i = 1, ..., SN.

2. Evaluate the fitness of the population.

3. Set cycle = 1, where the cycle denotes the iteration counter.

4. For each employed bee, create a new solution vi in the neighborhood of xi using the following formula:

vij = xij + φij (xij − xkj)

where xk is a randomly selected solution with k ≠ i, j is a randomly chosen dimension index, and φij is a random number in the range [−1, 1].

5. Apply the greedy selection process between xi and vi based on their fitness.

6. Calculate the probability values pi for the solutions xi from their fitness values using the following formula:

pi = fiti / Σ(n=1..SN) fitn

In order to estimate the fitness value of a solution we have used the following formula:

fiti = 1 / (1 + fi)  if fi ≥ 0,  and  fiti = 1 + |fi|  if fi < 0

Normalize the pi values into [0, 1].

7. Create the new solutions vi for the onlookers from the solutions xi, selected according to pi, and evaluate them.

8. Apply the greedy selection procedure for the onlookers between xi and vi based on fitness.

9. Determine the abandoned solution (food source), if it exists, and replace it with a new randomly produced solution for the scout using the following equation:

xij = xj_min + rand(0, 1) · (xj_max − xj_min)

10. Memorize the best food source position (solution) attained so far.

11. Set cycle = cycle + 1 and repeat from step 4 until cycle equals the maximum cycle number.
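A minimal, self-contained Java sketch of this loop is given below. It is illustrative only: the population size, abandonment limit, bounds, and the objective function are assumptions, and the sketch optimizes a generic real-valued objective rather than the paper's actual test-case representation.

import java.util.Random;
import java.util.function.ToDoubleFunction;

/** Illustrative sketch of the basic ABC loop described in the steps above. */
public class AbcSketch {
    static final int SN = 20;          // number of food sources (solutions)
    static final int D = 4;            // problem dimensions (e.g. one per measured factor)
    static final int MAX_CYCLES = 100; // maximum cycle number
    static final int LIMIT = 20;       // abandonment limit that triggers the scout phase
    static final Random RND = new Random();

    /** Minimizes the given objective f over [min, max]^D; returns the best solution found. */
    public static double[] optimize(ToDoubleFunction<double[]> f, double min, double max) {
        double[][] x = new double[SN][D];
        double[] fit = new double[SN];
        int[] trials = new int[SN];
        for (int i = 0; i < SN; i++) {                       // steps 1-2: initialize and evaluate
            for (int j = 0; j < D; j++) x[i][j] = min + RND.nextDouble() * (max - min);
            fit[i] = fitness(f.applyAsDouble(x[i]));
        }
        double[] best = x[0].clone();
        double bestFit = fit[0];
        for (int cycle = 1; cycle <= MAX_CYCLES; cycle++) {  // step 11: repeat until max cycles
            for (int i = 0; i < SN; i++)                     // steps 4-5: employed bee phase
                neighborAndSelect(i, x, fit, trials, f);
            for (int n = 0; n < SN; n++)                     // steps 6-8: onlooker bee phase
                neighborAndSelect(roulette(fit), x, fit, trials, f);
            for (int i = 0; i < SN; i++) {                   // step 9: scout bee phase
                if (trials[i] > LIMIT) {
                    for (int j = 0; j < D; j++) x[i][j] = min + RND.nextDouble() * (max - min);
                    fit[i] = fitness(f.applyAsDouble(x[i]));
                    trials[i] = 0;
                }
            }
            for (int i = 0; i < SN; i++)                     // step 10: memorize the best solution
                if (fit[i] > bestFit) { bestFit = fit[i]; best = x[i].clone(); }
        }
        return best;
    }

    /** Steps 4-5 / 7-8: v_ij = x_ij + phi * (x_ij - x_kj), then greedy selection. */
    static void neighborAndSelect(int i, double[][] x, double[] fit, int[] trials,
                                  ToDoubleFunction<double[]> f) {
        int k; do { k = RND.nextInt(x.length); } while (k == i);
        int j = RND.nextInt(x[i].length);
        double phi = 2 * RND.nextDouble() - 1;               // random number in [-1, 1]
        double[] v = x[i].clone();
        v[j] = x[i][j] + phi * (x[i][j] - x[k][j]);
        double fv = fitness(f.applyAsDouble(v));
        if (fv > fit[i]) { x[i] = v; fit[i] = fv; trials[i] = 0; } else { trials[i]++; }
    }

    /** Step 6: select a source with probability p_i = fit_i / sum(fit). */
    static int roulette(double[] fit) {
        double sum = 0; for (double value : fit) sum += value;
        double r = RND.nextDouble() * sum;
        for (int i = 0; i < fit.length; i++) { r -= fit[i]; if (r <= 0) return i; }
        return fit.length - 1;
    }

    /** fit_i = 1/(1+f_i) for f_i >= 0, else 1 + |f_i| (higher fitness = better solution). */
    static double fitness(double objective) {
        return objective >= 0 ? 1.0 / (1.0 + objective) : 1.0 + Math.abs(objective);
    }
}

A caller would supply its own objective over the encoded test-case factors; the modified version proposed in this paper replaces the random scout step with a K-Means-derived solution, as described next.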

In our proposed method we modify the ABC algorithm with the K-Means clustering algorithm, yielding a hybrid algorithm that incorporates K-Means into ABC. ABC is a widely used optimization technique, and K-Means is widely used for its efficient clustering. The ABC algorithm is based on three phases: the employed bee phase, the onlooker bee phase, and the scout bee phase. The employed bee phase and the onlooker bee phase are the essential phases of the ABC algorithm, whereas the scout bee phase is a random phase, so we apply the K-Means algorithm in this third phase. The addition of a new solution derived from K-Means after every cycle can extend the reach of the ABC algorithm to a different level. The factor values of the test cases are given as input to the modified ABC algorithm for prioritization.

The ABC algorithm operates over a multi-dimensional search space in which there are employed bees and onlooker bees; both are characterized by their experience in identifying food sources. The initial population comes from the employed bee phase, and the food locations are possessed by the employed bees. In the onlooker bee phase, the solution of the employed bee is altered based on the following formula:

vij = xij + φij (xij − xkj)

where xij is the solution obtained from the employed bee phase, φij is a randomly produced number in the range [−1, 1], and k and j are random indexes into the solution matrix of the employed bees.

A new solution is created using this formula and is evaluated with the fitness function to obtain its fitness value. If the new fitness value is better than the old one, the new solution is kept and the old one is eliminated. This process continues until all the employed bees have been processed. The scout bee phase is the final stage of the ABC algorithm, and it is implemented with the K-Means operator in order to find a new food source. The scout bee initiates the process by choosing, from the onlooker bee phase, the solution with the lowest fitness value. The onlooker bee phase generates diverse solutions based on different values; the solution with the least fitness value is selected and a distance matrix is computed. Based on the distance values in the matrix, the data points are grouped with respect to the minimum distance value. Then the centroid is computed by taking the mean of the data points in each cluster, and the computed centroid is given as a new solution to the scout bee phase. The process is repeated until the optimal solution is obtained; hence we obtain the optimal set of test cases.
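A minimal Java sketch of this K-Means step in the scout phase is shown below. It is a sketch under stated assumptions: the number of clusters, the simple seeding of the centroids, and the choice of returning the centroid of the largest cluster as the scout's replacement solution are assumptions, since the paper does not fix these details.

/** Illustrative K-Means step for the scout bee phase (assumed details noted above). */
public class KMeansScout {
    /** One assignment + centroid pass over the current food sources; the returned
     *  centroid replaces the lowest-fitness solution in the scout phase. */
    public static double[] newScoutSolution(double[][] solutions, int k) {
        int n = solutions.length, d = solutions[0].length;
        double[][] centroids = new double[k][];
        for (int c = 0; c < k; c++) centroids[c] = solutions[c * n / k].clone(); // simple seeding
        int[] assign = new int[n];
        for (int i = 0; i < n; i++) {                    // distance matrix / assignment step
            double best = Double.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                double dist = 0;
                for (int j = 0; j < d; j++) {
                    double diff = solutions[i][j] - centroids[c][j];
                    dist += diff * diff;
                }
                if (dist < best) { best = dist; assign[i] = c; }
            }
        }
        double[][] sums = new double[k][d];
        int[] counts = new int[k];
        for (int i = 0; i < n; i++) {                    // centroid = mean of its cluster members
            counts[assign[i]]++;
            for (int j = 0; j < d; j++) sums[assign[i]][j] += solutions[i][j];
        }
        int largest = 0;
        for (int c = 1; c < k; c++) if (counts[c] > counts[largest]) largest = c;
        double[] centroid = new double[d];
        for (int j = 0; j < d; j++) centroid[j] = sums[largest][j] / counts[largest];
        return centroid;                                 // fed back as the scout bee's new solution
    }
}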

Fig. 4: Flow diagram for ABC algorithm

5. RESULTS AND DISCUSSION:

In our proposed method we have generated efficient test cases in a test suite that can cover the given software under test within less time. The proposed method was implemented in Java. The case study used is a Hospital Management System. Nearly 3000 test cases are generated automatically using the NetBeans IDE. After test case generation, the metric values are calculated, and based on these values the test cases are prioritized using the modified ABC algorithm. The intermediate results obtained from the proposed methodology are described below.

Table I: Coverage values for test cases

Test Cases | Code Coverage | Branch Coverage | Statement Coverage | Time
1 | 22 | 5 | 12 | 0.001
2 | 3 | 9 | 4 | 0
3 | 41 | 0 | 22 | 0.003
4 | 13 | 11 | 12 | 0.002
5 | 15 | 8 | 9 | 0.004

Table I presents the metrics calculated for each test case. The code coverage, branch coverage, statement coverage, and time are some of the factors considered for prioritizing the test cases. The responsibility values for the test cases in this case study are described in Table II. After calculating all the measures, the test cases are prioritized using the modified ABC algorithm with K-Means.

Table II: Responsibility and Dependency Values

Test Cases | Responsibility | Dependency
1 | Billing.java: Billing(), Actionperformed(action event e); clear implements ActionListener; public static void main(String args); back implements ActionListener; submit extends Frame implements ActionListener | 2, 3, 3, 2, 3, 2, 2, 2
2 | main(); Actionperformed(action event e); clear implements ActionListener; login(); submit extends Frame implements ActionListener | 2, 2, 2, 2, 2

6. PERFORMANCE ANALYSIS:

Average Percentage of Faults Detected (APFD)

Here, the performance of the proposed system is evaluated by means of the APFD metric. The APFD value measures how rapidly the faults are detected for a given test suite ordering. Let T be the test suite under evaluation, m be the number of faults contained in the program under test, n be the total number of test cases, and TFi be the position of the first test in T that exposes fault i. The following formula is used for calculating the APFD metric:

APFD = 1 − (TF1 + TF2 + ... + TFm) / (n × m) + 1 / (2n)        (5)
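A small Java sketch of this computation is given below; the boolean matrix detects[i][f] (true when the i-th test case in execution order exposes fault f) is an assumed input format, not a structure defined in the paper.

/** Illustrative computation of the APFD metric in Eq. (5). */
public class Apfd {
    /** detects[i][f] is true when test case i (in execution order, 0-based) exposes fault f. */
    public static double compute(boolean[][] detects) {
        int n = detects.length;        // total number of test cases
        int m = detects[0].length;     // number of faults
        double sumTF = 0;
        for (int f = 0; f < m; f++) {
            for (int i = 0; i < n; i++) {
                if (detects[i][f]) { sumTF += i + 1; break; }  // TF_f: 1-based position of first exposing test
            }
        }
        return 1.0 - sumTF / (n * (double) m) + 1.0 / (2.0 * n);
    }
}

Reordering the rows of the matrix to match a prioritized sequence and calling compute again gives the before/after comparison reported in Fig. 5.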

In our case study, the total number of test cases is more than 3000, so here we use a small subset of test cases to illustrate the APFD calculation: the number of test cases is n = 7 and the number of faults is m = 5. The faults exposed by the test cases are represented in Table III.

Table III: The faults detected by the test suite (T1–T7) in Project 1

Faults | Detecting test cases among T1–T7
F1 | detected by 3 of the test cases
F2 | detected by 1 of the test cases
F3 | detected by 4 of the test cases
F4 | detected by 3 of the test cases
F5 | detected by 1 of the test cases

The number of test cases is n = 7 and the number of faults that occur during regression testing is m = 5. The test suite is prioritized into a new test sequence, and the APFD metric of Equation (5) is computed for both the prioritized ordering and the original (unprioritized) ordering. Fig. 5 compares the APFD metric values before and after prioritization of the test cases.

Fig.5: APFD comparison

6.1. Comparative analysis

Our proposed technique is compared with the genetic algorithm (GA) and the conventional ABC algorithm on the basis of convergence time and fitness values. The proposed technique converges in less time than the existing techniques, i.e., it obtains the solution earlier than GA does. This is represented in Table IV and Fig. 6.

Table IV: Convergence time comparison between the modified ABC, genetic, and conventional ABC algorithms

Iterations | Modified ABC Algorithm | Genetic Algorithm | ABC Algorithm
5 | 65 | 158 | 89
10 | 187 | 247 | 211
15 | 247 | 399 | 291
20 | 451 | 542 | 494
25 | 710 | 798 | 765

From the above table it is clear that in every iteration the time consumed by the genetic algorithm is higher than the time consumed by the modified ABC and conventional ABC algorithms. So we can conclude that the modified ABC algorithm is better than the genetic algorithm in terms of convergence time.

Table V: Fitness comparison between the modified ABC, genetic, and conventional ABC algorithms

Iterations | Modified ABC Algorithm | Genetic Algorithm | ABC Algorithm
5 | 308 | 161 | 243
10 | 391 | 177 | 278
15 | 390 | 154 | 291
20 | 402 | 161 | 286
25 | 410 | 158 | 295

Fig. 7 represents the fitness comparison of the proposed method with GA and conventional ABC. Here the objective function is a maximization problem: we prioritize the test cases so as to obtain maximum coverage values with less execution time. The proposed methodology obtained the highest fitness values when compared to the existing methods. The fitness values for each iteration are given in Table V.

Fig. 6: Convergence Time comparison

Fig. 7: Fitness comparison

Hence, the proposed modified ABC based test case prioritization technique can effectively prioritize the regression test cases with lower convergence time. Thus, our proposed test case prioritization technique minimizes the re-execution time of the project by prioritizing the most significant test cases.

7. CONCLUSION:

To enhance the rate of detection of severe faults, a new regression-testing-based test suite prioritization technique is proposed in this research for requirement-based, system-level test cases. This work identifies and addresses the challenges associated with prioritizing regression test cases. The proposed technique uses the most informative factors to order the test cases; in particular, the trace events procedure and the dependency values are especially beneficial, as these factors identify the significant test cases in the project. Based on these factors the test cases are prioritized using the modified ABC algorithm with K-Means. The effectiveness of the proposed prioritization technique is evaluated by means of the APFD metric on a sample application project, and a better severe-fault detection rate was achieved. The proposed method is compared with GA and conventional ABC in terms of convergence time and fitness. Experiments show that fewer test cases need to be run to detect the injected faults when the prioritized execution order is used. Additionally, the proposed method effectively prioritizes the test cases based on the measured factors, which minimizes the cost and time of executing the whole project.

REFERENCES:

[1] Xiao Qu, Myra B. Cohen and Katherine M. Woolf, "Combinatorial Interaction Regression Testing: A Study of Test Case Generation and Prioritization", in Proceedings of the International Conference on Software Maintenance, pp. 255-264, 2007.

[2] Zheng Li, Mark Harman and Robert M. Hierons, "Search Algorithms for Regression Test Case Prioritization", IEEE Transactions on Software Engineering, Vol. 33, No. 4, April 2007.

[3] Dennis Jeffrey and Neelam Gupta, "Test Case Prioritization Using Relevant Slices", Journal of Systems and Software, Vol. 81, No. 2, pp. 196-221, February 2008.

[4] Shan-Shan Hou, Lu Zhang, Tao Xie and Jia-Su Sun, "Quota-Constrained Test-Case Prioritization for Regression Testing of Service-Centric Systems", in Proceedings of the 24th International Conference on Software Maintenance, pp. 257-266, October 2008.

[5] Bo Jiang, Zhenyu Zhang, T. H. Tse and T. Y. Chen, "How Well Do Test Case Prioritization Techniques Support Statistical Fault Localization", in Proceedings of the 33rd Annual International Computer Software and Applications Conference, 2009.

[6] Lijun Mei, Zhenyu Zhang, W. K. Chan and T. H. Tse, "Test Case Prioritization for Regression Testing of Service-Oriented Business Applications", in Proceedings of the 18th International Conference on World Wide Web, 2009.

[7] Siripong Roongruangsuwan and Jirapun Daengdej, "Test Case Prioritization Techniques", Journal of Theoretical and Applied Information Technology, 2010.

[8] Arup Abhinna Acharya, Durga Prasad Mohapatra and Namita Panda, "Model Based Test Case Prioritization for Testing Component Dependency in CBSD Using UML Sequence Diagram", International Journal of Advanced Computer Science and Applications, Vol. 1, No. 6, December 2010.

[9] Miodrag T. Manic, Dejan I. Tanikic, Milos S. Stojkovic and Dalibor M. Denadic, "Modeling of the Process Parameters using Soft Computing Techniques", in Proceedings of the World Academy of Science, Engineering and Technology, 2011.

[10] Sonali Khandai, Arup Abhinna Acharya and Durga Prasad Mohapatra, "Prioritizing Test Cases Using Business Criticality Test Value", International Journal of Advanced Computer Science and Applications, Vol. 3, No. 5, 2011.

[11] R. Kavitha and N. Sureshkumar, "Factors Oriented Test Case Prioritization Technique in Regression Testing", European Journal of Scientific Research, Vol. 55, No. 2, pp. 261-274, 2011.

[12] Prem Parashar, Arvind Kalia and Rajesh Bhatia, "How Time-Fault Ratio helps in Test Case Prioritization for Regression Testing", International Journal of Software Engineering, Vol. 5, No. 2, July 2012.

[13] N. Prakash and T. R. Rangaswamy, "Multiple Criteria Based Test Case Prioritization for Regression Testing", European Journal of Scientific Research, Vol. 84, No. 1, pp. 36-45, 2012.

[14] Aseem Kumar, Sahil Gupta, Himanshi Reparia and Harshpreet Singh, "An Approach for Test Case Prioritization Based upon Varying Requirements", International Journal of Computer Science, Engineering and Applications (IJCSEA), Vol. 2, No. 3, June 2012.

[15] Suman and Seema, "A Genetic Algorithm for Regression Test Sequence Optimization", International Journal of Advanced Research in Computer and Communication Engineering, Vol. 1, September 2012.

[16] E. Ashraf, A. Rauf and K. Mahmood, "Value based Regression Test Case Prioritization", in Proceedings of the World Congress on Engineering and Computer Science, Vol. I, October 2012.

[18] James A. Jones and Mary Jean Harrold, "Test-Suite Reduction and Prioritization for Modified Condition/Decision Coverage", IEEE Transactions on Software Engineering, Vol. 29, No. 3, March 2003.

[19] Pavan Kumar Chittimalli and Mary Jean Harrold, "Recomputing Coverage Information to Assist Regression Testing", IEEE Transactions on Software Engineering, Vol. 35, No. 4, July/August 2009.

[20] Hyunsook Do, Siavash Mirarab, Ladan Tahvildari and Gregg Rothermel, "The Effects of Time Constraints on Test Case Prioritization: A Series of Controlled Experiments", IEEE Transactions on Software Engineering, Vol. 36, No. 5, September/October 2010.

[21] Shih-Chia Huang, "An Advanced Motion Detection Algorithm with Video Quality Analysis for Video Surveillance Systems", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 21, No. 1, January 2011.

[22] Aftab Ali Haider, Shahzad Rafiq and Aamer Nadeem, "Test Suite Optimization using Fuzzy Logic", IEEE Transactions on Software Engineering, 2012.

[23] Carlos R. del-Blanco, Fernando Jaureguizar and Narciso García, "An Efficient Multiple Object Detection and Tracking Framework for Automatic Counting and Video Surveillance Applications", IEEE Transactions on Circuits and Systems for Video Technology, 2012.

[24] Siavash Mirarab, Soroush Akhlaghi and Ladan Tahvildari, "Size-Constrained Regression Test Case Selection using Multicriteria Optimization", IEEE Transactions on Software Engineering, Vol. 38, No. 4, July/August 2012.

[25] Hong Mei, Dan Hao, Lingming Zhang, Lu Zhang, Ji Zhou and Gregg Rothermel, "A Static Approach to Prioritizing JUnit Test Cases", IEEE Transactions on Software Engineering, Vol. 38, No. 6, November/December 2012.

[26] Dervis Karaboga and Celal Ozturk, "A novel clustering approach: Artificial Bee Colony (ABC) algorithm", Applied Soft Computing, Vol. 11, pp. 652-657, 2011.


