Development Of Slicing Based C Debugger


ABSTRACT

Program slicing is a technique to extract the parts of a program that are directly or indirectly related to the computation of a variable of interest. Since Weiser first proposed the notion of slicing in 1979, different notions of slicing have come into existence, with different properties and different applications. These notions vary from Weiser's syntax-preserving static slicing to amorphous slicing, which is not syntax-preserving, and the algorithms can be based on dataflow equations, information-flow relations, or dependence graphs. In the present work, the static program slicing technique has been used to assist in the task of debugging, particularly in the academic setting. The purpose of this thesis work was to develop an interactive debugging tool, based on the program slicing method, for programs written in a subset of C.

The programming part of this thesis work consists of the design and implementation of a program slicing tool, called CSlicer, for debugging programs written in a subset of C. The CSlicer software tool is an interactive debugging tool that can be used to help debug simple structured C programs. CSlicer starts by constructing static backward slices; the information stored while constructing such slices was then used in the design of an algorithm that constructs forward slices in an iterative manner. The algorithm was designed and implemented. The tool also lets the user apply the dicing technique to further reduce the size of a slice, which speeds up the process of debugging. The CSlicer program is written in C and developed in Bloodshed Dev-C++, following the procedural abstraction technique. The simulated results show the effectiveness of slicing in the area of debugging, followed by a small scale evaluation.

Table of Contents

ABSTRACT
LIST OF FIGURES
Chapter-1 INTRODUCTION

Chapter-1 INTRODUCTION

Debugging is the task of identifying faults in code. A goal of the debugger, the person who debugs the code, is to localize the faulty area of the code and, at the same time, to develop an understanding of the program so that an adequate correction can be made. Working toward this goal is a labor-intensive and time-consuming activity. As a result, it is critical to identify debugging strategies that quickly narrow down the faulty area of the code while increasing understanding of the program.

The probability that a program compiles and runs correctly on the first attempt is low. The errors involved may lie in either the design or the coding [Borland90]. Debugging techniques are used to identify and fix such errors: they attempt to localize the cause of errors in a program and correct them [Brown and Sampson73]. It is generally difficult to find errors merely by observing the afflicted program's behaviour. As the size of a program increases, the cost associated with debugging generally increases, and the debugging process becomes more difficult, especially when programs written by other people are involved [Korel and Laski88]. The importance of good debugging tools cannot be overemphasized. Average programmers may spend considerable amounts (possibly more than 50%) of their program development time debugging. Several tools are available to help them in this task, varying from hexadecimal dumps of program state at the time of failure to window- and mouse-based interactive debuggers using bit-mapped displays. Most interactive debuggers provide breakpoints and traces as their main debugging aids. Unfortunately, these traditional mechanisms are often inadequate for the task of isolating specific program faults.

A number of methods, tools, and approaches have been developed to debug programs. Debugging approaches include file printing utilities, module testing packages, and built-in language facilities. Program Slicing is another debugging approach. Program slicing, as an approach to debugging, is based on the assumption that it is easier to locate errors in programs of smaller size rather than in the original source program of larger size. Program slicing focuses on the statements that are associated with one or more variables of interest defined as criterion variables [Samadzadeh and Wichaipanitch93].

The program statements that are not related to the criterion variables are omitted. Program slicing is based on data and control flow analysis and is applied to programs after they are written; hence it is primarily useful for the maintenance rather than the design of software [Nanja90]. Using a slicing method, one can obtain a new smaller program (or a program of the same size, in the worst case) that preserves part of the original program's behavior for a particular output or variable [Weiser84]. Program slicing can be categorized into static slicing and dynamic slicing depending upon the slicing algorithm and approach. Static slicing is a method of computing program slices directly from the original source program [Weiser84]. Dynamic slicing is a method of computing program slices from a particular execution of the original source program [Korel and Laski88] [Agrawal and Horgan90] [Samadzadeh and Wichaipanitch93].

1.2 Problem Statement

Programmers spend considerable time debugging code. Several tools are available to help them in this task, varying from hexadecimal dumps of program state to window- and mouse-based interactive debuggers, but the task still remains complex and difficult. Most of these conventional debuggers provide breakpoints and traces as their main debugging aids.

The main purpose of this thesis was to implement a program slicing algorithm for generating program slices. An interactive debugging tool called CSlicer was developed for debugging a subset of C programs; it was designed and developed based on established slicing techniques. The debugging tool was developed particularly to provide assistance in the academic setting. Its users should find it beneficial in debugging their C code in comparison to conventional debuggers.

The tool is capable of computing the statements affecting the value of a variable as well as the statements affected by modifying the variable. It also allows further reducing the slice size by using the dicing technique. The CSlicer program was implemented in the C language in Bloodshed Dev-C++. It was designed to help debug programs involving straight-line code and control statements such as if, for, while, do, and switch, and it can also handle expressions manipulating simple pointers to int, char, and float. Due to time constraints, structures, unions, and user-defined types were not included in the scope of this thesis.

1.3 Thesis Layout

The rest of this thesis report is organized as follows. Chapter II discusses different debugging methods and tools that are present on Linux and some other systems. It also introduces static slicing, dynamic slicing, and the different approaches used in implementing dynamic slicing. The chapter also explains different types of slicing, some of which are syntax-preserving and some of which are not. The chapter concludes by outlining the different application areas in which slicing has been found useful and discusses related work in program slicing.

Chapter III outlines the design aspects of the CSlicer program, including the data structures and the different algorithms that were used in designing and developing the CSlicer program. The chapter also discusses a prototype evaluation of the CSlicer program, followed by the output screens resulting from its execution. Chapter III concludes with a summary and some possible future enhancements to the CSlicer program.

Chapter-2 LITERATURE REVIEW

2.1 Debugging

Debugging is the process of identification of the symptoms of failures, tracing the bug, locating the errors that caused the bug and correcting these errors. To facilitate better understanding of debugging, it is appropriate to define the concepts of error, bug, fault, and defect [Nanja90].

2.1.1 Definitions

a.) Error. An error is a discrepancy that can result in faults in software. Errors occur inevitably while writing programs. Sources of errors can be briefly summarized [Wichaipanitch92] as follows:

Error in specifying the problem definition. This results in solving a wrong problem.

Error due to a wrong algorithm. This error occurs due to choosing a wrong algorithm for a given problem.

Semantic errors due to lack of proper knowledge of how a command or a programming construct works.

Errors resulting from incorrect programming of an algorithm.

Syntactic errors in a program that occur due to lack of sufficient knowledge about or proficiency in programming language concepts.

Data errors resulting from failure to predict the ranges of various data items correctly.

b.) Fault. A discrepancy in software, which can impair its ability to function as desired, is referred to as a fault. Faults can lead to the generation of incorrect output values for a given input. For instance, faults may occur when input variables are not initialized.

c.) Defect. A discrepancy between the code and the corresponding documentation, which may result in severe consequences in the process of installation, modification, maintenance, and testing, is known as a defect.

d.) Bug, Debugging, and Debugger. A bug in a computer program is an error that is due to either syntax or logic errors. Debugging attempts to locate such errors (without introducing new errors) and correct them. A debugger is a software tool that gives a user control over program execution. A user can observe and control the execution of a program and fix the bugs by comparing its behaviour with the specified intention. It should be noted that as the size of a program increases, the number of bugs associated with it also increases, and as the number of bugs increases, the cost associated with debugging also increases. It is a well-known fact that almost fifty percent of the cost involved in software development is associated with debugging and correcting the errors in the program during the testing phase [Tassel74]. Reducing the occurrence of errors in programs is one of the ways to decrease the cost associated with debugging.

e.) Testing. Testing is the process of attempting to verify the correctness of a program in its execution. Testing differs from debugging in that testing is used to test the correctness of a program whereas debugging is used to localize the cause of errors and to correct them [Brown and Sampson73]. The process of debugging and correcting errors of the program can be considered a part of the testing step.

2.1.2 Debugging Steps

Debugging can be broadly classified into three steps.

1. Identifying the bug: The first step in the debugging process is to identify the bug by studying the code. This becomes more complicated as the size of the program increases. If the bug cannot be identified, the scope, or set of suspect statements, must be narrowed down and the code studied again.

2. Identifying the cause of the bug: Once a bug is identified, the second and harder part is to identify the cause of the bug. The search for the cause is generally in that part of the program where the bug exists, rather than the whole program.

3. Fixing the problem: Once a bug and its cause are determined, necessary actions must be taken to rectify the problem. The program is then compiled again and tested for other bugs. If any new or residual bugs still exist, then the above debugging process may be repeated until no more errors are practically detectable in the program.

2.1.3 Debugging Approaches

Debugging is not an exact science; it is called an art in the sense that it is difficult to learn and to teach. Most programmers are trained in programming, but very rarely are they trained in debugging. It is as true today as ever that finding bugs and correcting programs is a difficult process. Some of the common debugging approaches are briefly described below:

Bottom-Up Approach: Concentrate on debugging a program's lowest-level functions (those that do not call other functions) first, then work upward towards the main part. In this way one obtains a foundation of reliable functions that can be stepped over when they are called in other parts of the code.

Look for Classes of Bugs: When a bug is identified, look for bugs of a similar kind in the same part of the program.

I/O-Based Approach: This approach comprises five steps, summarized as follows:

a. Feed the program some input and trace the code. Watch expressions to check the values of the output. Correct any bugs found.

b. Feed the program other sets of data that exercise the parts of the program not reached in the preceding step.

c. Test every statement in the program. Be alert for statements or expressions that must be tested in more than one way.

d. Concentrate on boundary conditions, which can make a program escape from a loop.

e. When a modification is made to a program, retest the affected parts thoroughly. If a program is complex, keep a record of the tests performed in the earlier steps; this record will help in rerunning all tests whose results could possibly be affected by the change. Once the above iteration is done, test the entire program for correct behaviour, including its response to every type of error it could possibly encounter, within the practical limits of time and effort.

Incremental Approach: To localize the cause of errors, adopt incremental testing. This process is feasible only when the programmer is conversant with the various programming constructs and understands the program under test reasonably well; it thus puts emphasis on the skills of the programmer involved.

Logical Approach: Use logical reasoning in determining the cause of errors. This process is done manually and becomes more difficult in dealing with large and complex programs.

Trace-Based Approach: Perform a program trace to determine when the program started performing incorrectly. This becomes more difficult when dealing with large and complex programs. The approach depends upon the programmer's skill and knowledge acquired from experience, because experience is of great help in identifying the elements of the program that are to be traced and in interpreting the trace information generated.

A mistake in the human thought process made during the construction of a program is called an error. Evidence of errors comes through program failures, typically incorrect output values, unexpected program termination, or non-terminating execution. It is often the case that the root cause of a failure can be traced to a small area of a program. If so, that area is said to contain a fault.

It is important to note that sometimes program failures are indications of global problems such as mistaken assumptions or inappropriate architectural decisions. In such cases, it is misleading to assume that editing a small area of the program will prove sufficient to correct an error. At the start of debugging, as far as the debugger is concerned, the code fault area can be anywhere in the program; it is the task of the debugger to reduce this range as much as possible. No one method of code reduction is favoured universally by people who debug programs; rather, different people prefer different methods. Most experienced programmers use a subset, if not all, of the aforesaid approaches, or switch among them, while debugging unfamiliar programs [Nanja90].

2.1.4 Debugging Tools

Historically speaking, when debugging techniques were introduced, programmers needed to understand all aspects of a source program and localize the part of the program that did not function as expected. This period is known as the "without-tool" generation [Nanja90]. Later on, several debugging tools were developed. In the first generation, debugging tools were based on specific machine architectures; such tools provide memory dumps and absolute instruction traces, and are called low-level debuggers. In the second generation, tools were designed and developed to provide the memory address of a variable while debugging. In the third generation, debugging tools became capable of some deduction regarding the presence of errors in programs. Examples of low-level debuggers include UNIX adb and DOS DEBUG. Examples of high-level debuggers include symbolic debuggers, knowledge-based debuggers, database debuggers, and slicing-based debuggers.

2.1.4.1 Symbolic Debuggers

Symbolic debuggers provide information based on the programming language used to write the programs being debugged. The contents of the variables in the program can be examined without mentioning the actual addresses of the variables. These debuggers provide various options such as tracing a variable, setting watchpoints and breakpoints, and line-by-line execution.

The main advantage of symbolic debuggers, compared with low-level debuggers, is that there is no need to know the specific machine architecture. Examples of this type of debugger can be found on VAX and UNIX systems. The symbolic debugger on VAX is called VAX-DEBUG and can be used to debug programs written in assembly languages, FORTRAN, BLISS, Basic, COBOL, Pascal, and PL/I [Nanja90]. The symbolic debugger on UNIX is called sdb; it supports FORTRAN, C, and C++.

2.1.4.2 Symbolic Debugger on Sequent Symmetry

A symbolic debugger present on the Sequent Symmetry is pdbx. It can be used for source-level debugging and execution of both conventional and parallel programs. At present, this tool can be used to debug Pascal, FORTRAN, C, and C++ programs. It can be invoked by the command pdbx or dbx; when invoked by the command dbx, it can debug only conventional one-process, one-program applications. DBX is a source-level debugger found primarily on Solaris, AIX, IRIX, Tru64 UNIX, GNU/Linux, and BSD operating systems. It provides symbolic debugging for programs written in C, C++, Pascal, FORTRAN, and Java. Useful features include stepping through programs one source line or machine instruction at a time. In addition to simply viewing the operation of the program, variables can be manipulated and a wide range of expressions can be evaluated and displayed.

To debug using pdbx, a program should be compiled with the -g option on the command line. This produces an executable file (execfile) containing a symbol table that includes the names of all the source files translated by the compiler, which makes all the source files available for perusal while using the debugger. It is perhaps worth noting that, by default, the executable code generated is saved into an "a.out" file; to redirect the executable code into another file, one compiles the source code with the -o option.

Special Option for C on Linux: To compile a C program on Linux, one gives the command gcc <filename>. To debug C programs with gdb, the program must first be compiled with the -g option; gdb is then invoked on the resulting executable.

The following example illustrates some of the features supported by gdb.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len (const char *s)
{
  return strlen (s);
}

int
main (int argc, char *argv[])
{
  const char *a = NULL;
  printf ("size of a = %d\n", foo_len (a));
  exit (0);
}

Fig-1: Sample program

Using the GCC compiler on GNU/Linux, the code above must be compiled with the -g flag in order to include appropriate debug information in the generated binary, thus making it possible to inspect it using GDB. Assuming that the file containing the code above (Fig-1) is named example.c, the command for the compilation could be:

gcc example.c -g -o example

And the binary can now be run:

# ./example
Segmentation fault

Since the example code, when executed, generates a segmentation fault, GDB can be used to inspect the problem.

# gdb ./example
GNU gdb (GDB) Fedora (7.3.50.20110722-13.fc16)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /path/example...done.
(gdb) run
Starting program: /path/example

Program received signal SIGSEGV, Segmentation fault.
0x0000000000400527 in foo_len (s=0x0) at example.c:8
8         return strlen (s);
(gdb) print s
$1 = 0x0

The problem is present in line 8 and occurs when calling the function strlen (because its argument, s, is NULL). Depending on the implementation of strlen (inlined or not), the output can differ, as in the example below:

# gdb ./example
GNU gdb (GDB) 7.3.1
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "i686-pc-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /tmp/gdb/example...done.
(gdb) run
Starting program: /tmp/gdb/example

Program received signal SIGSEGV, Segmentation fault.
0xb7ee94f3 in strlen () from /lib/i686/cmov/libc.so.6
(gdb) bt
#0 0xb7ee94f3 in strlen () from /lib/i686/cmov/libc.so.6
#1 0x08048435 in foo_len (s=0x0) at example.c:8
#2 0x0804845a in main (argc=<optimized out>, argv=<optimized out>) at example.c:16

To fix the problem, the variable a (in the function main) must contain a valid string. Here is a fixed version of the code:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

size_t
foo_len (const char *s)
{
  return strlen (s);
}

int
main (int argc, char *argv[])
{
  const char *a = "This is a test string";
  printf ("size of a = %d\n", foo_len (a));
  exit (0);
}

Fig-2: Fixed version of the sample code in Fig-1

Recompiling and running the executable again inside GDB now gives a correct result:

GNU gdb (GDB) Fedora (7.3.50.20110722-13.fc16)
Copyright (C) 2011 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>...
Reading symbols from /path/example...done.
(gdb) run
Starting program: /path/example
size of a = 21
[Inferior 1 (process 14290) exited normally]

GDB prints the output of printf on the screen and then informs the user that the program exited normally.

2.1.5 Slicing-Based Debuggers

Slicing-based debugging tools produce a slice of a program depending on the variable(s) of interest. A slice can be executable or not, depending on the slicing criterion and the slicing method used. A slice is a set of program statements that directly or indirectly contribute to the values assumed by a set of variables at some program point [Weiser84] [Venkatesh91]. These debuggers are high-level debuggers that can be used to locate the cause of errors.

How Does One Debug?

Given that a program has failed to produce the desired output, how does one go about finding where it went wrong? Other than the program source, the only important information usually available to the programmer is the input data and the erroneous output produced by the program. If the program is sufficiently simple, it can be analyzed manually on the given input. However, for many programs, especially lengthy ones, such analysis is much too difficult to perform. One logical way to proceed in such situations is to think backwards: deduce the conditions under which the program produces the incorrect output. Slicing-based debugging tools enable users to follow this "natural" thought process while debugging.

Traditionally, in order to understand a program’s behavior, a programmer uses conventional debuggers that support breakpoint facilities and step-wise program execution. Breakpoints allow a programmer to specify places in a program where the execution should be suspended. When a breakpoint is reached and the execution is suspended, the programmer can then examine various components of the program state and check the correctness of values of variables. Programmers may also execute a program in a step-wise manner in order to observe the program execution.

Conventional debuggers, however, do not provide any means of identifying the contributing parts of the program being debugged. Using such debuggers is an inefficient and time-consuming way of understanding program behaviour, especially when the programmer is interested in observing only those parts of the behaviour that relate to the incorrect output. The programmer may observe a large amount of unrelated computation, and it is frequently almost impossible to distinguish related computations from unrelated ones. To make the process of program debugging more efficient, it is important to focus the programmer's attention on the "essential" components (statements, variables, etc.) of the program and its execution.

2.2 Program Slicing

Program slicing is a source-to-source transformation that can be used in the construction, testing, analysis, and debugging of programs [Weiser84] [Venkatesh91]. It was introduced by Mark Weiser [Weiser81] and is used to localize errors in programs. Slicing is concerned with the variables of interest, called criterion variables; statements involving only other variables are omitted. In general, one obtains a new program of smaller size that still retains all aspects of the original program's behaviour with respect to the criterion variable. Program slicing thus decomposes a large program into relatively smaller programs called slices [Weiser81].

Operationally, a slice of a program represents a subset of the program's behaviour over all possible inputs. The implication of this definition is that one can execute a slice of a program to obtain the values of the criterion variable [Venkatesh91]. Moreover, there can be different slices satisfying the definition for a given program point and criterion variable; there is always at least one slice for a given slicing criterion: the program itself. A statement-minimal slice is defined as a slice with the least number of statements. Finding a statement-minimal slice is undecidable, since it would require solving the halting problem, but one can find approximate slices using data and control flow analysis [Weiser84].

The advantages of slices and slicing methods are based on four facts, as stated below [Weiser84]:

Slices can be found automatically, by methods that decompose programs by analyzing their data flow and control flow.

A slice is normally smaller than the original program.

Slices can be executed independently of one another. A slice is itself an executable program whose behaviour must be identical to a specified subset of the original program's behaviour.

Each slice produces exactly one projection of the original program's behavior.

The problems with slices are listed below:

They may prove expensive to find for some programs.

Producing slices for some large complex programs may be difficult.

There may not be any significant slices for a program.

Their total independence may cause additional complexity in each slice that could be cleaned up if some simple dependence can be found.

Selection of the slicing variables may create problems.

In general, however, it is easy to find significant slices for large classes of programs. Program slicing can be categorized into static slicing and dynamic slicing depending upon the slicing criterion.

2.2.1 Static Slicing

Static slicing is defined on the basis of all possible computations of a program. It produces a program segment consisting of those statements that may possibly affect the slicing criterion in some execution [Weiser84]. Static slicing is the method of computing slices directly from the original source program, and a slice obtained by a static slicing criterion is called a static slice. Generally, it is easier to find a static slice for a program than to obtain a dynamic slice.

In general, a slicing criterion of a program P is a tuple <i, v>, where i is a statement in P and v is a subset of the variables in P [Weiser84]. According to Weiser, "a slice can be defined behaviourally as any subset of a program which preserves a specified projection of its behaviour" [Weiser84].

2.2.1.1 Static Slicing Approach

Static slicing can be approached in terms of program reachability using the Program Dependence Graph (PDG). The PDG is a directed graph with vertices corresponding to statements and control predicates, and edges corresponding to data and control dependences. The slicing criterion is identified with a vertex in the PDG, and a slice corresponds to all PDG vertices from which the vertex under consideration can be reached. Such slices are computed by gathering statements and control predicates in a backward traversal of the program's control flow graph (CFG) or PDG, starting at the slicing criterion; hence they are referred to as backward static slices. Reps and Bricker were the first to use the forward static slice terminology. Informally, a forward slice consists of all statements and control predicates dependent on the slicing criterion, a statement being "dependent" on the slicing criterion if the values computed at that statement depend on the values computed at the slicing criterion, or if the values computed at the slicing criterion determine whether the statement under consideration is executed. Backward and forward slices are computed in the same way, except that a forward slice requires tracing dependences in the forward direction. In other words, a backward slice contains the statements of a program that have some effect on the slicing criterion, which helps the developer locate the parts of the program that contain a bug.

Consider the following sample portion of a program.

scanf("%d", &terminate_var);
product = 1;
sum = 1;
for (counter = 1; counter <= terminate_var; counter++)
{
    sum = sum + counter;
    product = product * counter;
}
average = (sum - 1) / terminate_var;
printf("\n The Sum is : %d ", sum);
printf("\nThe Product is : %d ", product);
printf("\n The Average is : %d ", average);

Fig-3: Sample program fragment

The program above (Figure 3) produces too large a value for the variable sum. To locate the bug, a backward slice is computed on the variable sum, to find the lines that contribute to the incorrect value. The backward slice is shown in Figure 4. It shows that the value of sum is initialized to 1; since sum is a running total, it has to be initialized to zero. Therefore, to correct the bug, replace the assignment sum=1 with the assignment sum=0.

sum = 1;
for (counter = 1; counter <= terminate_var; counter++)
{
    sum = sum + counter;
}
printf("The Sum is : %d", sum);

Fig-4: Backward slice for criterion variable sum
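To make the reachability view concrete, the following is a minimal sketch of backward slicing as plain graph reachability. It is an illustration only, not CSlicer's implementation; the statement numbering and the hand-entered dependence edges approximate the fragment of Fig-3 and would, in a real tool, come from data and control flow analysis.

#include <stdio.h>

#define MAX_STMTS 16

/* dep[i][j] != 0 means statement i is data- or control-dependent on
   statement j, i.e. the PDG has an edge from i back to j. A backward
   slice is then plain reachability from the criterion vertex. */
static int dep[MAX_STMTS][MAX_STMTS];
static int in_slice[MAX_STMTS];

static void backward_slice(int stmt, int n)
{
    if (in_slice[stmt])
        return;
    in_slice[stmt] = 1;
    for (int j = 1; j < n; j++)
        if (dep[stmt][j])
            backward_slice(j, n);
}

int main(void)
{
    /* Hand-coded dependences for the fragment of Fig-3:
       1: scanf(&terminate_var)   2: product = 1   3: sum = 1
       4: for (counter ...)       5: sum = sum + counter
       6: product = product * counter   7: printf(sum)            */
    int n = 8;
    dep[4][1] = 1;                        /* loop bound uses terminate_var */
    dep[4][4] = 1;                        /* counter++ uses counter        */
    dep[5][3] = dep[5][4] = dep[5][5] = 1;
    dep[6][2] = dep[6][4] = dep[6][6] = 1;
    dep[7][3] = dep[7][5] = 1;            /* printf uses the defs of sum   */

    backward_slice(7, n);                 /* criterion: sum at statement 7 */
    for (int i = 1; i < n; i++)
        if (in_slice[i])
            printf("statement %d is in the slice\n", i);
    return 0;
}

Running this marks statements 1, 3, 4, 5, and 7, i.e. the backward slice of Fig-4 plus the scanf of terminate_var that the loop bound pulls in. A forward slice is the same traversal over the transposed relation: follow dep[j][stmt] instead of dep[stmt][j] to collect the statements affected by the criterion.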

A forward slice contains the statements of the program which are affected by the slicing criterion. It is used to predict the parts that will be affected by a modification to the program.

The forward slice for Figure 3 is shown in Figure 5. A forward slice on the variable sum identifies the ripple effects caused by a change in the value of sum: fixing the sum bug (initializing sum to 0 instead of 1) would introduce a bug in the assignment of average, so that statement must be replaced by

average = sum / terminate_var;

From the graphical representation (PDG) view, a backward slice with respect to a slicing criterion (a set of variables V at a program point P) consists of the set of nodes that directly or indirectly affect the computation of the variables in V at node P. A forward slice is the set of program statements and predicates that are affected by the computation of the value of a variable V at a program point P.

scanf("%d",& terminate_var);

product=1;

sum=0;

for(counter=1; counter<=terminate_var; counter++)

{

sum=sum+counter; /* Affected statement */

product=product*counter;

}

average=(sum-1)/terminate_var; /* Affected statement */

printf("\n The Sum is : %d ",sum); /* Affected statement */

printf("\nThe Product is : %d " , product);

printf("\n The Average is : " ,average); /*Affected statement */

Fig- : forward slice w.r.t criterion variable "sum" labelled as affected statement

One advantage of static slicing over dynamic slicing is that it is easier and faster to identify a static slice [Samadzadeh and Wichaipanitch93], because the computations for generating a static slice are done directly from the original source program. However, static program slices tend to be large and imprecise when the programs to be debugged involve pointers and composite variables such as arrays, records, and unions. Dynamic slicing overcomes these shortcomings, at the cost of applying only to a particular execution of the program.

#include <stdio.h>
#include <string.h>
main()
{
    int number;
    int fact;
    int total;
    int variable;
    variable = 1;
    fact = 1;
    scanf("%d", &number);
    if (number < 0)
    {
        printf("error\n");
        number = 0;
    }
    while (number < 10) {
        total = total + 1;
        number++;
        fact = fact * variable;
        variable++;
    }
    printf("total is %d\t factorial is %d\n", total, fact);
}

Fig-6: Sample program to be sliced

11. scanf("%d",&number);
12. if( number < 0)
13. {
14. printf("error\n");
15. number = 0;
16. }
17. while(number < 10){
18. total = total + 1;
19. number++;
22. }

Fig-7: Static slice w.r.t. criterion (23, total)

2.2.2 Dynamic Slicing

Static slicing was extended to dynamic slicing by Korel and Laski [Korel and Laski88]. According to Korel, a dynamic slice is a subprogram that computes the values of the criterion variables in a specific execution of a program [Venkatesh91]. In contrast to Korel and Laski's approach, Agrawal [Agrawal, et al.91] defined a dynamic slice as a collection of statements that affect the values of the criterion variable in a specific execution of a program; such a slice may not be executable by itself. To clarify the two definitions, consider the example shown in Figure 6, which calculates a running total and product while counting up to ten. Korel and Laski's dynamic slice with respect to the variable total at S19 contains statement S16, because statements which affect the control statements must be included in the slice. Figure 8 shows Korel and Laski's dynamic slice of the program in Figure 6.

#include<stdio.h> S1
#include<string.h> S2
main() S3
{
int number; S4
int total; S6
while(number < 10) S14
{
total = total + 1; S15
number++; S16

Fig-8: A dynamic slice of the sample program shown in Figure 6, based on Korel and Laski's definition of dynamic slicing

According to Agrawal and Horgan's definition of a dynamic slice, the dynamic slice with respect to variable total at S19 includes all of the statements that directly affect the criterion variable. All other variables, even those involved in the control flow of the original program, are omitted. Figure 9 shows the dynamic slice of the program in Figure 6 based on Agrawal and Horgan's definition.

#include<stdio.h> S1
#include<string.h> S2
main() S3
{
int number; S4
int total; S6
while(number < 10) S14
{
total = total + 1; S15

Fig-9: A dynamic slice of the sample program shown in Figure 6, based on Agrawal and Horgan's definition of dynamic slicing

Compared to Agrawal and Horgan's dynamic slice, Korel and Laski's dynamic slice is larger in size, but Korel and Laski's slices are guaranteed to be executable and do not end up in infinite loops.

Dynamic slicing consists of two activities [Venkatesh95]: the first is to obtain a trace of the execution of the program for a given input; the second is to construct slices for variables present in the program. The execution trace for a program can be obtained by instrumenting the source code or the object code, called source-level instrumentation and object-level instrumentation, respectively [Venkatesh95]. To obtain an execution trace, Agrawal and Horgan chose source-level instrumentation over object-level instrumentation for the following reasons [Venkatesh95]: ease of portability to different platforms and simplicity of implementation.
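As an illustration of source-level instrumentation, the sketch below instruments the small loop program used later in this section so that a completed run leaves its trajectory behind in a file. The TRACE scheme is an assumption for illustration, not Agrawal and Horgan's actual instrumentation.

#include <stdio.h>

/* Illustrative trace macro: every statement and predicate is prefixed
   with TRACE(id), which appends its static statement number to
   trace.log; the file then holds the executed trajectory. */
static FILE *trace_fp;
#define TRACE(id) fprintf(trace_fp, "%d\n", (id))

int main(void)
{
    int n = 0, i, x = 0;
    trace_fp = fopen("trace.log", "w");
    if (trace_fp == NULL)
        return 1;

    TRACE(1); if (scanf("%d", &n) != 1) n = 0;   /* 1: read(n) */
    TRACE(2); i = 1;                             /* 2: i := 1  */
    while ((TRACE(3), i <= n)) {                 /* 3: test logged on every evaluation */
        if ((TRACE(4), i % 2 == 0)) {            /* 4: predicate */
            TRACE(5); x = 17;                    /* 5 */
        } else {
            TRACE(6); x = 18;                    /* 6 */
        }
        TRACE(7); i = i + 1;                     /* 7 */
    }
    TRACE(8); printf("%d\n", x);                 /* 8: write(x) */

    fclose(trace_fp);
    return 0;
}

For input 2, trace.log receives the statement sequence 1, 2, 3, 4, 6, 7, 3, 4, 5, 7, 3, 8, which is exactly the trajectory analyzed in the subsection on Korel and Laski's method below.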

Dynamic Slicing Procedures.

To facilitate a better understanding of dynamic program slicing as proposed by Agrawal and Horgan, and by Korel and Laski, the following definitions are presented [Korel and Laski90] [Agrawal, et al.91] [Wichaipanitch92].

Let the flow graph of the program P be a digraph (V, R, S, L) and C be the slicing criterion, where V represents the set of vertices, R represents a binary relation on V, referred to as the set of arcs, S ∊ V is a unique entry node, and L ∊ V is a unique exit node.

A vertex in V consists of one instruction, such as input/output statements, assignment statements, and control instructions (e.g., if-then-else or while), which are called test instructions.

An arc corresponds to a possible transfer of control flow from one instruction to another instruction.

A path from S to some vertex n ∊ V is called a sequence. If there is input data that causes the path to be traversed during execution, then the path is feasible.

A trajectory is a feasible path that has been executed for some input. An element of a trajectory is represented as an ordered pair (an instruction, its position in the trajectory) so as to distinguish among multiple occurrences of the same instruction in a trajectory. If an element of trajectory T of a program P is represented by (k, p) for some instruction k at position p, then the pair can be written k^p and is referred to as an action. An action k^p is a test action if k is a test instruction.

If T represents the trajectory of a program P on input x, then the dynamic slicing criterion of P executed on x can be defined as C = (x, I^q, V), where I^q is an action and V is a subset of the variables in P.

If hist denotes the execution history of a program P on a test case test, and var is a variable, then the dynamic slice of P with respect to hist and var is the set of all statements in hist whose execution has some effect on the value of var as observed at the end of the execution of the program. Unlike static slicing, dynamic slicing is defined with respect to the end of the execution history; if a dynamic slice with respect to some intermediate point in the execution is to be determined, then the partial execution history up to that point should be considered [Agrawal, et al.91].

Methods for dynamic slicing

Algorithmic approach

Korel and Laski describe how dynamic slices can be computed. They formalize the execution history of a program as a trajectory consisting of a sequence of occurrences of statements and control predicates; labels serve to distinguish between different occurrences of a statement in the execution history. As an example, Figure 10 below shows the trajectory of the program fragment that follows, for input n = 2. A dynamic slicing criterion is specified as a triple (x, I^q, V), where x denotes the input of the program, the statement occurrence I^q is the q-th element of the trajectory, and V is a subset of the variables of the program.

Korel and Laski define a dynamic slice with respect to a criterion (x, I^q, V) as an executable program S that is obtained from a program P by removing zero or more statements. Three restrictions are imposed on S. First, when executed with input x, the trajectory of S is identical to the trajectory of P from which all statement instances are removed that correspond to statements that do not occur in S. Second, identical values are computed by the program and its slice for all variables in V at the statement occurrence specified in the criterion. Third, it is required that the statement corresponding to the statement instance specified in the slicing criterion occurs in S. Korel and Laski observe that their notion of a dynamic slice has the property that if a loop occurs in the slice, it is traversed the same number of times as in the original program.

In order to compute dynamic slices, Korel and Laski introduce three dynamic flow concepts that formalize the dependences between occurrences of statements in a trajectory.

The Definition-Use (DU) relation associates a use of a variable with its last definition. Note that in a trajectory, this definition is uniquely determined.

The Test-Control (TC) relation associates the most recent occurrence of a control predicate with the statement occurrences in the trajectory that are control dependent upon it. This relation is defined in a syntax-directed manner, for structured program constructs only.

The symmetric Identity Relation (IR) relates different occurrences of the same statement.

Dynamic slices are computed in an iterative way, by determining successive sets of directly and indirectly relevant statement occurrences. For a slicing criterion (x, I^q, V), the initial approximation A^0 contains the last definitions of the variables in V in the trajectory before action I^q, as well as the test actions in the trajectory on which I^q is control dependent. The approximations S^i of the slice are defined as follows:

S^0 = A^0
S^(i+1) = S^i ∪ A^(i+1)

where A^(i+1) is defined as below:

A^(i+1) = { X^p | X^p ∉ S^i, (X^p, Y^t) ∈ DU ∪ TC ∪ IR for some Y^t ∈ S^i, p < q }

The dynamic slice then consists of all statements X such that some action X^p appears in the final set, together with the statement named in the criterion.

Example: To illustrate the iterative approach, consider the following program fragment.

1 read(n);
2 i := 1;
3 while (i <= n) do
  begin
4   if (i mod 2 = 0) then
5     x := 17
    else
6     x := 18;
7   i := i + 1
  end;
8 write(x);

1^1: read(n)
2^1: i := 1
3^1: i <= n          /* (1 <= 2) */
4^1: (i mod 2 = 0)   /* (1 mod 2 = 1) */
6^1: x := 18
7^1: i := i + 1
3^2: i <= n          /* (2 <= 2) */
4^2: (i mod 2 = 0)   /* (2 mod 2 = 0) */
5^1: x := 17
7^2: i := i + 1
3^3: i <= n          /* (3 <= 2) */
8^1: write(x)

Fig-10: Trajectory of the example program above for input n = 2

Dynamic flow concepts for the above trajectory.

DU = { (1^1,3^1), (1^1,3^2), (1^1,3^3), (2^1,3^1), (2^1,4^1), (2^1,7^1), (7^1,3^2), (7^1,4^2), (7^1,7^2), (7^2,3^3), (5^1,8^1) }

TC = { (3^1,4^1), (3^1,6^1), (3^1,7^1), (3^2,4^2), (3^2,5^1), (3^2,7^2), (4^1,6^1), (4^2,5^1) }

IR = { (3^1,3^2), (3^2,3^1), (3^1,3^3), (3^3,3^1), (3^2,3^3), (3^3,3^2), (4^1,4^2), (4^2,4^1), (7^1,7^2), (7^2,7^1) }

The dynamic slice for the above trajectory with respect to the criterion (n = 2, 8^1, {x}) is computed as follows. Since the final statement 8^1 is not control dependent on any other statement, the initial approximation of the slice consists of the last definition of x:

A^0 = { 5^1 }

Subsequent iterations produce A^1 = { 3^2, 4^2 }, A^2 = { 1^1, 3^1, 4^1, 7^1, 3^3 }, and A^3 = { 2^1, 7^2 }. From this it follows that

S = { 1^1, 2^1, 3^1, 3^2, 3^3, 4^1, 4^2, 5^1, 7^1, 7^2 } ∪ { 8^1 }

Thus, the dynamic slice with respect to criterion (n = 2, 8^1, {x}) includes every statement except statement 6, whose only occurrence 6^1 in the trajectory has no effect on the criterion.
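The iteration above is mechanical enough to spell out in code. The sketch below is an illustration under the relations listed above, not CSlicer's implementation: the twelve trajectory positions and the DU, TC, and IR pairs are encoded as position pairs, and the criterion's initial set is closed under them until a fixpoint is reached.

#include <stdio.h>

#define LEN 12

/* stmt[p] gives the statement number of the action at trajectory
   position p (1..12), transcribing Fig-10. */
static const int stmt[LEN + 1] = {0, 1, 2, 3, 4, 6, 7, 3, 4, 5, 7, 3, 8};

struct pair { int src, tgt; };

static const struct pair rel[] = {
    /* DU pairs, as trajectory positions (definition, use) */
    {1,3}, {1,7}, {1,11}, {2,3}, {2,4}, {2,6}, {6,7}, {6,8}, {6,10},
    {10,11}, {9,12},
    /* TC pairs (test action, dependent action) */
    {3,4}, {3,5}, {3,6}, {7,8}, {7,9}, {7,10}, {4,5}, {8,9},
    /* IR pairs (occurrences of the same statement, both directions) */
    {3,7}, {7,3}, {3,11}, {11,3}, {7,11}, {11,7}, {4,8}, {8,4},
    {6,10}, {10,6},
};
static const int nrel = sizeof rel / sizeof rel[0];

int main(void)
{
    int in_slice[LEN + 1] = {0};
    const int q = 12;        /* criterion action 8^1 at position 12 */
    in_slice[9] = 1;         /* A^0: last definition of x, i.e. 5^1 */

    int changed = 1;
    while (changed) {        /* compute A^(i+1) until a fixpoint    */
        changed = 0;
        for (int k = 0; k < nrel; k++) {
            int s = rel[k].src, t = rel[k].tgt;
            if (in_slice[t] && !in_slice[s] && s < q) {
                in_slice[s] = 1;
                changed = 1;
            }
        }
    }
    in_slice[q] = 1;         /* the criterion statement itself      */

    for (int p = 1; p <= q; p++)
        if (in_slice[p])
            printf("position %2d: statement %d is in the slice\n",
                   p, stmt[p]);
    return 0;
}

Running it prints every position except position 5 (action 6^1), reproducing the slice derived above.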

Program dependence graph and dynamic slice

Agrawal and Horgan designed an approach using the PDG in which a separate node is created for each occurrence of a statement in the execution history, with outgoing dependence edges to only those statement occurrences on which it is dependent. Every node in the new dependence graph has at most one outgoing edge for each variable used at the statement. The resulting graph is called the Dynamic Dependence Graph. A program has different dynamic dependence graphs for different execution histories. Miller and Choi also defined a similar dynamic dependence graph.

Consider, for example, the program in Figure 11 and the test case (N = 3, X = -4, 3, -2), which yields the execution history <1^1, 2^1, 3^1, 4^1, 5^1, 6^1, 8^1, 9^1, 10^1, 3^2, 4^2, 5^2, 7^1, 8^2, 9^2, 10^2, 3^3, 4^3, 5^3, 6^2, 8^3, 9^3, 10^3, 3^4>.

Figure 12 shows the Dynamic Dependence Graph for this execution history. The middle three rows of nodes in the figure correspond to the three iterations of the loop, as shown by the presence of a node for statement 8 in each of these rows. During the first and third iterations, node 8 depends on node 6, corresponding to the dependence of statement 8 on the value of Y assigned by statement 6, whereas during the second iteration it depends on node 7, corresponding to the dependence of statement 8 on the value of Y assigned by statement 7. Once the Dynamic Dependence Graph is constructed for the given execution history, the dynamic slice for a variable var is easily obtained by first finding the node corresponding to the last definition of var in the execution history, and then finding all nodes in the graph reachable from that node. Figure 12 shows the effect of using this approach on the Dynamic Dependence Graph of the program in Figure 11 for the test case (N = 3, X = -4, 3, -2), for variable Z at the end of the execution. Nodes in bold belong to the slice. Note that statement 6 belongs to the slice whereas statement 7 does not.
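The construction can be sketched by hand-simulating the test case: each executed statement allocates a fresh node whose edges point at the nodes that produced the values it uses and at its controlling predicate. This is a minimal illustration of the idea, not Agrawal and Horgan's implementation; limiting each node to two dependences is a simplification that happens to suffice for this program.

#include <stdio.h>

#define MAXN 128

static int dep[MAXN][MAXN];  /* dep[a][b]: node a depends on node b   */
static int stmt_of[MAXN];    /* static statement number of each node  */
static int nnodes;

static int new_node(int stmt, int d1, int d2)   /* up to two deps */
{
    int id = nnodes++;
    stmt_of[id] = stmt;
    if (d1 >= 0) dep[id][d1] = 1;
    if (d2 >= 0) dep[id][d2] = 1;
    return id;
}

static void mark(int id, int *seen)             /* graph reachability */
{
    if (seen[id]) return;
    seen[id] = 1;
    for (int j = 0; j < nnodes; j++)
        if (dep[id][j]) mark(j, seen);
}

int main(void)
{
    int X[] = {-4, 3, -2}, N = 3;   /* the test case (N = 3, X = -4, 3, -2) */
    int dN, dI, dY = -1, dZ = -1, pred;

    dN = new_node(1, -1, -1);              /* S1: read(N) */
    dI = new_node(2, -1, -1);              /* S2: I = 1   */
    for (int i = 0; i < N; i++) {
        pred = new_node(3, dI, dN);        /* S3: while (I <= N) */
        int dX = new_node(4, pred, -1);    /* S4: read(X)        */
        int tst = new_node(5, dX, pred);   /* S5: if (X < 0)     */
        if (X[i] < 0)
            dY = new_node(6, dX, tst);     /* S6: Y = f1(X) */
        else
            dY = new_node(7, dX, tst);     /* S7: Y = f2(X) */
        dZ = new_node(8, dY, pred);        /* S8: Z = f3(Y) */
        new_node(9, dZ, pred);             /* S9: write(Z)  */
        dI = new_node(10, dI, pred);       /* S10: I = I+1  */
    }
    new_node(3, dI, dN);                   /* final loop test */

    int seen[MAXN] = {0};
    if (dZ >= 0)
        mark(dZ, seen);                    /* criterion: Z at end of run */
    for (int id = 0; id < nnodes; id++)
        if (seen[id])
            printf("node %d (statement S%d) is in the slice\n",
                   id, stmt_of[id]);
    return 0;
}

Marking from the last definition of Z reaches the S6 instance that produced the final value of Y but no S7 instance, matching the slice described above.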

begin
S1:  read(N);
S2:  I = 1;
S3:  while (I <= N) do
S4:      read(X);
S5:      if (X < 0) then
S6:          Y = f1(X);
         else
S7:          Y = f2(X);
         end if;
S8:      Z = f3(Y);
S9:      write(Z);
S10:     I = I + 1;
     end while;
end.

Fig-11: Sample program

Fig-12: Dynamic Dependence Graph for the program in Fig-11 for the test case (N = 3, X = -4, 3, -2). Nodes in bold give the dynamic slice for this test case w.r.t. variable Z at the end of execution

Dynamic Slicing Using the Reduced Dynamic Dependence Graph

The size of a Dynamic Dependence Graph (total number of nodes and edges) is, in general, unbounded, because the number of nodes equals the number of statement occurrences in the execution history, which may depend on run-time input values. For example, for the program in Figure 11, the number of statements in its execution history, and hence the size of its Dynamic Dependence Graph, depends on the value read into variable N at statement 1. On the other hand, every program has only a finite number of possible dynamic slices, each slice being a subset of the (finite) program. Thus, in order to restrict the number of nodes in a Dynamic Dependence Graph so that its size is not a function of the length of the execution history, another approach is used: instead of creating a new node for every occurrence of a statement in the execution history, a new node is created only if no node with the same transitive dependencies already exists. The resulting graph is called the Reduced Dynamic Dependence Graph.

To build it without having to save the entire execution history, two tables are maintained, DefnNode and PredNode. DefnNode maps a variable name to the node in the graph that last assigned a value to that variable. PredNode maps a control predicate statement to the node that corresponds to the last occurrence of this predicate in the execution history thus far. Also, a set reachableStmts is associated with each node in the graph; this set consists of all statements one or more of whose occurrences can be reached from the given node. Every time a statement Si gets executed, the set of nodes D that last assigned values to the variables used by Si, and the last occurrence C of the control predicate of the statement, are determined. If a node n associated with Si already exists whose immediate descendents are the same as D ∪ {C}, the new occurrence of Si is associated with n. Otherwise a new node is created with outgoing edges to all nodes in D ∪ {C}. The DefnNode table entry for the variable assigned at Si, if any, is updated to point to this node. Similarly, if the current statement is a control predicate, the corresponding entry in PredNode is updated to point to this node.

Consider again the program in Figure 11 and the test case (N = 3, X = -4, 3, -2), which yields the execution history <1^1, 2^1, 3^1, 4^1, 5^1, 6^1, 8^1, 9^1, 10^1, 3^2, 4^2, 5^2, 7^1, 8^2, 9^2, 10^2, 3^3, 4^3, 5^3, 6^2, 8^3, 9^3, 10^3, 3^4>. Figure 13 shows the Reduced Dynamic Dependence Graph for this execution history. Every node in the graph is annotated with the set of all statements reachable from it. Note that there is only one occurrence of node 10 in this graph, as opposed to three occurrences in the Dynamic Dependence Graph for the same program and test case. Also, the second occurrence of node 3 is merged with its immediate descendent node 10, because the reachableStmts set of the former, {1, 2, 3, 10}, was a subset of that of the latter. The third occurrence of node 3 in the execution history has node 1 and node 10 as immediate descendents; since these immediate dependencies are also contained in the merged node (10, 3), the third occurrence of node 3 is also associated with this node.

Fig-13: The Reduced Dynamic Dependence Graph for the program in Fig-11 for the test case (N = 3, X = -4, 3, -2), obtained using the above approach. Each node is annotated with reachableStmts, the set of all statements reachable from that node

2.2.3 Other types of slicing

2.2.3.1 Quasi static slicing

Quasi static slicing was the first attempt to define a hybrid slicing method between static and dynamic slicing. The need for quasi static slicing arises in applications where the values of some input variables are fixed while the behaviour of the program must be analyzed as the other input values vary. A quasi static slice preserves the behaviour of the original program with respect to the variables of the slicing criterion on a subset of the possible program inputs; this subset is specified by the possible combinations of values that the unconstrained input variables may assume. When all variables are unconstrained, the quasi static slice coincides with a static slice; when the values of all input variables are fixed, it coincides with a dynamic slice. By specifying the values of some of the input variables, constant propagation and simplification can be used to reduce expressions to constants. In this way, the values of some program predicates can be evaluated, allowing the deletion of branches which are not executed on the particular partial input. Quasi static slices are computed on such specialized programs.

Example:

(1) scanf("%d",&n);
(2) scanf("%d",&a);
(3) sum=0;
(4) prod=1;
(5) if (n>0)
(6) { sum += a;
(7)   prod *= a;
(8)   a += 2; }
(9) if (n<0)
(10) { sum -= a;
(11)   prod *= a;
(12)   a -= 2; }
(13) printf("\n sum is %d",sum);
(14) printf("\n prod is %d",prod);

Fig-14: Sample program to be sliced

As an example, let us consider the portion of a program in Figure 14. The quasi static slice for the slicing criterion C = ({n}, 1, 14, {sum}) is shown in Figure 15. The criterion's first argument refers to the variables whose values are to be fixed, the second argument gives the value assigned to them, the third argument is the line number in the program to be sliced, and the fourth argument indicates the variables of interest in the program.

(1) scanf("%d",&n);
(2) scanf("%d",&a);
(3) sum=0;
(5) if (n>0)
(6) { sum += a;
(8)   a += 2; }
(13) printf("\n sum is %d",sum);

Fig-15: Quasi static slice with criterion C = ({n}, 1, 14, {sum}) for the program in Fig-14
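The specialization step (fix some inputs, evaluate the predicates they determine, and delete untaken branches before the slice is taken) can be sketched as follows. The encoding of the two guarded regions of Fig-14 as a table is purely illustrative and is not how a real specializer would represent a program.

#include <stdio.h>

/* Toy sketch of specialization for quasi static slicing: with the
   input n fixed (here n = 1, per the criterion above), each predicate
   that depends only on fixed inputs is evaluated ahead of time, and
   untaken branches are discarded before ordinary static slicing. */
struct region {
    const char *guard;   /* textual predicate, for reporting        */
    int taken;           /* guard evaluated under the fixed input   */
    const char *body;    /* statements controlled by the guard      */
};

int main(void)
{
    const int n = 1;     /* fixed input from the criterion          */
    struct region regions[] = {
        { "n > 0", n > 0, "sum += a; prod *= a; a += 2;" },
        { "n < 0", n < 0, "sum -= a; prod *= a; a -= 2;" },
    };
    for (int i = 0; i < 2; i++) {
        if (regions[i].taken)
            printf("keep   [%s] %s\n", regions[i].guard, regions[i].body);
        else
            printf("delete [%s] %s\n", regions[i].guard, regions[i].body);
    }
    return 0;
}

With n fixed to 1, the n > 0 region is kept and the n < 0 region is deleted, which is why lines 9 to 12 of Fig-14 are absent from the quasi static slice of Fig-15.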

2.2.3.2 Simultaneous dynamic slicing

A different form of slicing introduced by Hall computes slices with respect to a set of program executions. This method is called simultaneous dynamic program slicing because it extends dynamic slicing and applies it simultaneously to a set of test cases, rather than just one. A simultaneous program slice on a set of test cases is constructed by an iterative algorithm that, starting from an initial set of statements, incrementally builds the simultaneous dynamic slice by computing a larger dynamic slice at each iteration. Simultaneous dynamic slicing has been used to locate functionality in code; the set of test cases can be seen as a kind of specification of the functionality to be identified.

A simultaneous dynamic slice of a program P on a simultaneous dynamic slicing criterion C = ({I1, I2, ..., Im}, S, V) is any syntactically correct and executable program P' obtained from P by deleting zero or more statements, where the Ii are the inputs, S is a statement in the program, and V is a subset of the variables in P.

Let us consider the example program shown in Figure 16, which finds the positive sum, negative sum, positive product, and negative product of a sequence of inputs. By comparing the positive and negative sums, and the positive and negative products, the program displays the greater of each pair.

EXAMPLE:

(1) read n;
(2) read a;
(3) read chk;
(4) i=pprod=nprod=1;
(5) psum=nsum=0;
(6) while ( i<=n && a<=n) {
(7) if (a > 0) {
(8) psum += a;
(9) pprod *= a; }
(10) else if (a<0) {
(11) nsum -= a;
(12) nprod *= (-a); }
(13) else if (chk) {
(14) if (psum>=nsum)
(15) psum = 0;
(16) else nsum = 0;
(17) if (pprod >= nprod)
(18) pprod = 1;
(19) else nprod = 1; }
(20) i++;
(21) read a;
(22) if (i<=n) {
(23) sum = 0;
(24) prod = 1; }
(25) else {
(26) if (psum>=nsum)
(27) sum = psum;
(28) else sum = nsum;
(29) if (pprod >= nprod)
(30) prod = pprod;
(31) else prod = nprod; }
(32) write sum;
(33) write prod;

Fig-16: Sample program to be sliced

A simultaneous program slice on a set of test cases is not simply the union of the dynamic slices on the component test cases. Indeed, the simple union of dynamic slices is unsound, in that it does not maintain simultaneous correctness on all the inputs. An iterative algorithm is therefore used that, starting from an initial set of statements, incrementally constructs the simultaneous dynamic slice, computing a larger dynamic slice at each iteration. This approach can be used in program comprehension for isolating the subset of statements corresponding to a particular program behaviour. It can be considered a refinement of the method proposed by Wilde et al., who treat the problem of locating functionality in code as identifying the relation between the way the user and the programmer see the program. Simultaneous dynamic slicing can be considered a refinement of methods for localization of functions based on test cases, because it takes the data flow of the program into account and thus allows a reduction of the set of selected statements.

(1) read n;
(2) read a;
(3) read chk;
(4) i = 1;
(5) psum = nsum = 0;
(6) while (i<=n && a<=n) {
(7) if (a > 0) {
(8) psum += a;
(10) else if (a<0) { }
(13) else if (chk) {
(14) if (psum>=nsum)
(15) psum = 0;
(20) i++;
(21) read a;
(22) if (i<=n) { }
(25) else {
(26) if (psum>=nsum)
(27) sum = psum;
(32) write sum;

Fig-17: Simultaneous dynamic slice of the program in Fig-16
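Schematically, the iterative computation can be sketched as below. Statement sets are bitmasks, dyn_slice closes a set under the dependences observed in one execution trace, and the outer loop re-applies every test case until nothing changes. The edge lists here are hypothetical stand-ins for real execution traces, so this is a sketch of the mechanism only, not Hall's algorithm in full.

#include <stdio.h>

typedef unsigned long Set;              /* bit i set: statement i in slice */

struct edge { int from, to; };          /* "from" depends on "to" */
static const struct edge t1[] = { {4,2}, {5,4} };   /* trace of test 1 */
static const struct edge t2[] = { {4,3}, {5,4} };   /* trace of test 2 */
static const struct { const struct edge *e; int n; } tests[] = {
    { t1, 2 }, { t2, 2 },
};

/* Close s under the dynamic dependences of one test case. */
static Set dyn_slice(const struct edge *e, int n, Set s)
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < n; i++)
            if ((s >> e[i].from & 1) && !(s >> e[i].to & 1)) {
                s |= 1ul << e[i].to;
                changed = 1;
            }
    }
    return s;
}

int main(void)
{
    Set s = 1ul << 5;                   /* criterion: statement 5 */
    Set prev;
    do {                                /* iterate over all tests to a fixpoint */
        prev = s;
        for (int t = 0; t < 2; t++)
            s = dyn_slice(tests[t].e, tests[t].n, s);
    } while (s != prev);
    for (int i = 0; i < 8; i++)
        if (s >> i & 1)
            printf("statement %d is in the simultaneous slice\n", i);
    return 0;
}

The fixpoint guarantees that the result is closed under every test case at once, which a one-shot union of the individual dynamic slices does not.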

2.2.3.3 Conditioned slicing

Conditioned slicing is a generalization of both static and dynamic slicing. The conditioned slicing criterion augments the static criterion with a condition which captures a set of possible initial states for which the slice and the original program must agree. A conditioned slice is constructed with respect to a triple (V, n, Π), where V is a set of variables, n is a program point, and Π is a condition. A statement may be removed from a program P to form a slice S of P iff it cannot affect the value of any variable in V when the next statement to be executed is at point n and the initial state satisfies Π.

A conditioned slice thus consists of a subset of program statements which preserves the behaviour of the original program with respect to a slicing criterion for a given set of program execution paths. The set of initial states of the program that characterize these paths is specified in terms of a first order logic formula on the input variables. Given a program and the set of initial states, a conditioned slicing algorithm first uses a symbolic executor to simplify the program by discarding paths that are infeasible with respect to the input condition; slicing is then performed on the reduced program. As infeasible paths are discarded, the result is more precise than that of traditional slicing methods. Conditioned slicing [Canfora 1998] is a more general form of quasi static slicing and constrained slicing, with the input states characterized by a universally quantified first order predicate logic formula. In fact, conditioned slicing is a framework for statement-deletion based methods: the conditioned slicing criterion can be specialized to obtain any such form of slice. Conditioned slicing allows a better decomposition of the program, giving human readers the possibility to analyze code fragments with respect to different perspectives.

Later, Harman et al. [Harman 2001] presented and formalized the pre/post conditioned slicing method, which combines forward and backward conditioning to provide a unified framework for conditioned program slicing. The pre/post conditioned slicing can be used to improve the analysis of programs in terms of pre- and post- conditions.

Fox et al. introduced backward conditioning and illustrated its usage in [Fox 2001]. Like forward conditioning (used in conditioned slicing), backward conditioning consists of specializing a program with respect to a condition inserted into the program. However, it addresses questions of the form "what parts of the program could potentially lead to the program arriving in a state satisfying condition c?", which is different from forward conditioning.

main()
{
    int a, test0, n, i;
    int posprod, negprod, possum, negsum;
    int sum, prod;
    scanf("%d", &test0);
    scanf("%d", &n);
    i = 1;
    posprod = 1;
    negprod = 1;
    possum = 0;
    negsum = 0;
    while (i <= n)
    {
        scanf("%d", &a);
        if (a > 0) {
            possum = possum + a;
            posprod = posprod * a; }
        else if (a < 0) {
            negsum = negsum - a;
            negprod = negprod * (-a); }
        else if (test0) {
            if (possum >= negsum)
                possum = 0;
            else negsum = 0;
            if (posprod >= negprod)
                posprod = 1;
            else negprod = 1; }
        i = i + 1; }
    if (possum >= negsum)
        sum = possum;
    else sum = negsum;
    if (posprod >= negprod)
        prod = posprod;
    else prod = negprod;
}

Fig-18: Example from Canfora et al.

main()
{
    int a, test0, n, i;
    int possum, negsum, sum;
    scanf("%d", &test0);
    scanf("%d", &n);
    i = 1;
    possum = 0;
    negsum = 0;
    while (i <= n)
    {
        scanf("%d", &a);
        if (a > 0)
            possum = possum + a;
        else if (a < 0)
            negsum = negsum - a;
        else if (test0) {
            if (possum >= negsum)
                possum = 0;
            else negsum = 0; }
        i = i + 1;
    }
    if (possum >= negsum)
        sum = possum;
    else sum = negsum;
}

Fig-19: Static slice of the program in Fig-18 w.r.t. ({sum}, end)

main() {
    int a, n, i, possum, negsum, sum;
    scanf("%d", &n);
    i = 1;
    possum = 0;
    negsum = 0;
    while (i <= n) {
        scanf("%d", &a);
        if (a > 0)
            possum = possum + a;
        i = i + 1; }
    if (possum >= negsum)
        sum = possum; }

Fig-20: Conditioned slice of the program in Fig-18 w.r.t. ({sum}, end, a > 0), where end is the end of the program and a > 0 indicates that every value read into a is positive


