Definition Of Testing And Test Cases


02 Nov 2017


Introduction

In this lecture we present software testing, test cases, debugging, and review techniques. The lecture first presents test cases, then debugging, and finally review techniques.

Learning Outcomes

Be familiar with the types, advantages, and disadvantages of software testing.

Be familiar with the purpose and functionality of debugging.

Be familiar with review techniques.

19.0 Introduction

Test cases are the specific inputs and procedures that the programmer/software tester follows when testing the software. They are the sequences of tests/test suites that are executed to test the software system.

Debugging is the process of identifying and rectifying errors and bugs in a software system in order to make it perform in the desired way.

Review techniques are methods used to detect defects in systems and products. The review process maintains the quality of the product by reviewing its deliverables during development.

19.1 Test Cases

This section describes software testing and test cases, their definition, usage, objectives, design, types, and classification.

19.1.1 Definition of Testing and Test Cases

Testing is defined as the process of evaluating a system or its component(s), by manual or automated means, to verify that it satisfies specified requirements or to identify differences between expected and actual results. Testing aims to find errors in the system under test, and is used to assess and evaluate the quality of the system.

Software testing examines a system or a software application under controlled conditions. It deliberately drives the system into situations where things can go wrong (fault injection). It can be described as a search for mistakes/errors in the system, so that they can be rectified, the system reworked, and the product made as error-free as possible.

Objectives of software testing include the following:

Validate and verify that a software system or product meets the requirements that guided its design and development;

Validate and verify that a software system or product works as expected; and

Validate and verify that a software system or product can be implemented with the same characteristics.

Test cases are the specific inputs and procedures that the programmer/software tester follows when testing the software. They are the sequence of tests that are executed to test the software system. A test case is a collection of conditions that the software is put through in order to see whether it is functioning the way it should. Once the software has gone through these conditions, the software's outputs/interactions are compared with the pre-defined user requirements and use cases. This determines whether the software has passed or failed.

In order to be fully checked and deliver the desired requirements, the software will need to pass a number of test cases. Test cases are also known as test scripts, especially when they have been written. These test scripts are then put together into test suites.
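As a minimal sketch of how written test scripts are grouped into a test suite, the following example uses Python's standard unittest module; the function under test and all names are hypothetical, not from any particular system.

```python
import unittest

# Hypothetical unit under test.
def add(a, b):
    return a + b

# Each test method is one test script: a known input and an expected result.
class AdditionTests(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

# Individual test scripts are collected into a test suite and run together.
def build_suite():
    suite = unittest.TestSuite()
    suite.addTest(AdditionTests("test_positive_numbers"))
    suite.addTest(AdditionTests("test_negative_numbers"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())
```

Each requirement would normally contribute several such scripts, and the suite gives a single pass/fail verdict for the whole group.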

In software engineering, IEEE Standard 610 defines a test case as follows:

"A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement".

"Documentation specifying inputs, predicted results, and a set of execution conditions for a test item".

The objective of test cases is to find bugs in the software system being tested and get these bugs fixed. Test cases can show the presence of bugs, but never their absence. If the results delivered by the system differ from the expected ones in just one case, this shows that the system is incorrect. Conversely, correct behavior of the system on a finite number of cases does not guarantee correctness in the general case.

Advantages of test cases include:

They can avoid or minimize unnecessary debugging.

Errors and bugs can be identified and fully rectified.

19.1.2 Documentation of Test Cases

A test case can be described as a document that specifies an input of some sort, something happening as a result, and an expected result, in order to determine whether a system or part of a system is functioning the way it should.

Information about a test case could include the following:

ID

Description

Writer (of test case)

Associated requirement(s)

Category

Is the test case automated?

Predicted result(s) and actual result(s)

Extra fields to be completed once the test is finished in order to record if the software passed or failed

Additional comments

A formal written test case document consists of three parts:

Information: General information about the test case, such as author of test case, test case ID & name and a short description of the purpose of the test case.

Activity: The activity part of the test document contains the input data that is used for the testing, when this data is inserted and the various steps the tester needs to carry out in the duration of the testing.

Results: This includes the anticipated and actual results of the testing.
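The three parts above can be sketched as a simple record type; the field names below are illustrative choices, not taken from any standard template.

```python
from dataclasses import dataclass, field

# A sketch of a formal test case document mirroring the three parts:
# Information, Activity, and Results. All field names are hypothetical.
@dataclass
class TestCaseDocument:
    # Information part: author, ID & name, short description.
    case_id: str
    name: str
    author: str
    description: str
    # Activity part: input data and the steps the tester carries out.
    input_data: dict = field(default_factory=dict)
    steps: list = field(default_factory=list)
    # Results part: anticipated and actual results.
    expected_result: str = ""
    actual_result: str = ""

    def passed(self) -> bool:
        # Pass/fail is decided by comparing actual against expected.
        return self.actual_result == self.expected_result

tc = TestCaseDocument(
    case_id="TC-001",
    name="Login with valid credentials",
    author="Tester A",
    description="Verify a registered user can log in.",
    input_data={"username": "alice", "password": "secret"},
    steps=["Open login page", "Enter credentials", "Press Login"],
    expected_result="Dashboard is shown",
)
tc.actual_result = "Dashboard is shown"  # filled in after the test is run
print(tc.passed())  # prints True
```

In practice such records are often kept in a spreadsheet or test-management tool rather than in code, but the structure is the same.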

Test cases could be formal or informal.

Formal test cases: Formal test cases make use of a known input and an anticipated or expected output, which is identified before the test is carried out. For all of the pre-defined system requirements to be tested and met, a minimum of two test cases is carried out for each requirement: a positive test case and a negative one.

Informal test cases: Some systems or software being developed may not have any formal requirements. In this case, test cases are written on the basis of a similar working system or part of a system that is being tested.
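The positive/negative pairing for a single requirement can be sketched as follows; the requirement ("a user must be 18 or older to register") and the function name are hypothetical.

```python
# Hypothetical requirement: registration is allowed only for ages >= 18.
def can_register(age):
    return age >= 18

# Positive test case: valid input, expected to be accepted.
assert can_register(21) is True

# Negative test case: invalid input, expected to be rejected.
assert can_register(15) is False
```

The positive case checks that valid input is handled correctly; the negative case checks that invalid input is rejected rather than silently accepted.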

The results of the test cases may be documented and presented using an office package such as Microsoft Office, which provides spreadsheets, databases, and word-processing documents.

19.1.3 Designing Test Cases

In order for test cases to produce accurate results, they have to be designed by someone who has in-depth knowledge of the system being tested, so that the system can be inspected fully.

A test case should include the following information:

Objective of test.

Pre-defined system requirements.

An explanation on how the tests are carried out.

Predicted results.

19.2 Static and Dynamic Software Testing

Software testing is classified, based on when testing is carried out, into static testing and dynamic testing.

Static testing: Static testing is usually done before coding is complete, and does not involve executing the code. In practice, static testing is sometimes omitted.

Static software testing approaches include:

Reviews

Walkthroughs

Inspections

Dynamic testing: Dynamic testing involves executing pieces of code under controlled test cases. The whole software (code) can be tested, but it is most common that small parts of the system are tested before the full system is completely developed.

19.3 Software Testing Types

Software testing could be done using different techniques or methods. In the following we present some of these methods.

Black box testing: Black box testing is concerned with the functionality of the system or part of system that is being tested. It tests if the functionality and results produced meet the requirements. This type of testing is not concerned with inside of the system, in other words, the coding/programming.

White box testing: Unlike black box testing, white box testing focuses on the coding of the system and requires a good understanding of the programming that has been used. The testing is conducted on the actual code.

Unit testing: In unit testing the programmer tests specific functions or code modules. This is similar to white box testing as the tester needs a good knowledge of programming and code in order to carry out the testing. This testing is difficult unless the application has a well-designed architecture with tight code.
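As a sketch of unit testing in the white-box spirit described above, the tests below use knowledge of the function's internal branches to exercise each one; the function and its thresholds are hypothetical.

```python
import unittest

# Hypothetical module-level function with three branches.
def classify(temp_celsius):
    if temp_celsius < 0:
        return "freezing"
    elif temp_celsius < 25:
        return "mild"
    return "hot"

class ClassifyUnitTests(unittest.TestCase):
    # One unit test per branch, reflecting the tester's knowledge of the code.
    def test_freezing_branch(self):
        self.assertEqual(classify(-5), "freezing")

    def test_mild_branch(self):
        self.assertEqual(classify(10), "mild")

    def test_hot_branch(self):
        self.assertEqual(classify(30), "hot")

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(
        unittest.defaultTestLoader.loadTestsFromTestCase(ClassifyUnitTests))
```

A well-modularized architecture makes this kind of per-function testing straightforward, which is the "tight code" point made above.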

Incremental integration testing: Incremental integration testing is continuous testing of the system as new functionality is added. This testing is done by programmers or by testers.

Integration testing: Integration testing is the process of testing combined parts of a system to determine whether they function together correctly. This testing applies to any type of system that has several independent sub-systems.

Acceptance testing: This type of testing is carried out at the final stage of the development cycle in order to determine whether the system is acceptable to the end user.

Usability testing: Usability testing determines how user friendly the system is. It is targeted at the end user of the system and is carried out using interviews, surveys and questionnaires.

Install/Uninstall testing: Install/uninstall testing is conducted by installing and uninstalling full, partial, or upgraded versions of the system.

Security testing: This type of testing is concerned with how the system reacts to illegitimate access. Will the system prohibit/deny this access, or will it fail to recognize the access as unauthorized?

Comparison testing: Comparison testing is used to compare the performance of the software system to competing products. This testing reveals weaknesses and strengths of the tested system.

Alpha testing: This type of testing is carried out once the software being developed is nearly complete. The testing is carried out by end users and not the developers/programmers of the software. It is common that alterations occur at this stage as a result of alpha testing.

Beta testing: Once alpha testing is carried out and the alterations to the system are made and the errors/bugs are identified and resolved, a second round of testing is carried out, also known as beta testing. Similar to alpha testing, the testing is carried out by end users, not by the developers/programmers.

19.4 Debugging

In this section we discuss debugging. We first define debugging, then discuss the types of bugs. Later, we present debugging tools. Finally, we present debugging techniques.

19.4.1 Definition of Debugging

In computing, debugging is the process of identifying and rectifying errors and bugs in a software system in order to make it perform in the desired way. Debugging can be a tricky and difficult process, especially in sub-systems that are heavily interlinked, as an alteration in one sub-system may have an impact on another, and so on.

Testing and debugging go together, since testing finds errors while debugging localizes and repairs them. Together they form the test/debug cycle: testing is done, then debugging, then the cycle is repeated. Any debugging should be followed by a re-application of all relevant tests. This avoids or minimizes the introduction of new bugs during debugging. Finally, testing and debugging need not be done by the same people.

To debug a program, the debugger/tester starts with a problem, isolates the source of the problem, and then fixes it. When a program is debugged, this means that the bugs are worked out of the program. They are fixed so that they no longer exist in the product.

Debugging is a necessary part of almost any new software or hardware development process, whether for a commercial product or an enterprise or personal application program. For complex products, debugging is done following the unit-level tests of the smallest units of a system, the component-level tests, and the system-level tests.

19.4.2 Types of Bugs

Common types of bugs in software systems (applications/programs) include:

Compile-time bugs: Compile-time bugs include syntax errors, spelling errors, and static type mismatches. They are usually caught by the compiler.

Design bugs: Design bugs are flawed algorithms. These bugs generate incorrect outputs.

Program logic bugs: Program logic bugs include errors in if/else conditions, loop termination, select/case statements, and similar constructs. These bugs generate incorrect outputs.

Memory nonsense bugs: Memory nonsense bugs include null pointers, array bounds, bad types, and leaks. These bugs generate runtime exceptions.

Interface errors between modules, threads, and programs: These errors occur, in particular, with shared resources such as sockets, files, memory, etc. They generate runtime exceptions.

Off-nominal conditions: Off-nominal conditions result from the failure of some part of the software or of the underlying machinery (network, etc.). These bugs lead to incomplete functionality.

Deadlock bugs: Deadlock bugs occur when multiple processes compete for a resource. These bugs cause freeze-ups and never-ending processes.
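As a minimal sketch of how deadlocks of this kind are commonly avoided, the two threads below always acquire shared locks in the same global order, so neither can hold one resource while waiting forever for the other; the lock and thread names are illustrative.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

# Both workers acquire the locks in the same fixed order (lock_a, then
# lock_b). If one worker took them in the opposite order, each thread could
# end up holding one lock while waiting for the other: a classic deadlock.
def worker(name):
    with lock_a:
        with lock_b:
            results.append(name)

t1 = threading.Thread(target=worker, args=("t1",))
t2 = threading.Thread(target=worker, args=("t2",))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))  # prints ['t1', 't2']
```

Consistent lock ordering is one of several standard remedies; timeouts on lock acquisition are another.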

19.4.3 Debugging Tools

In order to conduct debugging, debugging tools which are called debuggers are used. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, re-start it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Debuggers help identify errors in program code at various development stages. Some programming language packages include a facility for checking the code for errors as it is being written.

Debugging ranges in complexity from fixing simple errors to performing more complex tasks of data collection, analysis, and scheduling updates. The difficulty of software debugging varies greatly with the complexity of the system, the programming language used, and the tools used for debugging the system.

19.4.4 Debugging Techniques

Techniques used in debugging include the following:

Print (or tracing) debugging: This type of debugging focuses on the execution of the internal processes of the system. It is carried out using trace statements which are produced instantaneously as a result of the debugging.
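A minimal sketch of print (trace) debugging follows: trace statements expose the internal execution of a function as it runs. The function and its trace format are hypothetical.

```python
# Trace statements are emitted at each step, revealing the internal state
# of the computation as it happens.
def running_total(values):
    total = 0
    for i, v in enumerate(values):
        total += v
        print(f"step {i}: added {v}, total is now {total}")
    return total

running_total([3, 5, 2])  # returns 10, tracing each step along the way
```

If the final result were wrong, the trace output would show exactly at which step the total diverged from what was expected.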

Remote debugging: This type of debugging is concerned with debugging a system that is not running on the same machine as the debugger. In order to carry out the debugging, the debugger connects to the remote system and takes control of the execution of the system being debugged.

Post-mortem debugging: This type of debugging is carried out on a system after it has ‘crashed’.

Delta Debugging: Delta Debugging is the process of automating test case simplification.
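The idea can be sketched as follows. A full delta debugging implementation (Zeller's ddmin algorithm) splits the failing input into chunks and tests complements; this simplified version just greedily drops one element at a time while the failure persists. The failure condition is a stand-in, not a real bug.

```python
# Stand-in for "running the program on this input reproduces the failure".
# Hypothetical bug: the program fails whenever 7 and 9 both appear.
def still_fails(case):
    return 7 in case and 9 in case

# Greedy simplification: repeatedly remove any single element whose removal
# still reproduces the failure, until no further removal is possible.
def simplify(case):
    changed = True
    while changed:
        changed = False
        for i in range(len(case)):
            candidate = case[:i] + case[i + 1:]
            if still_fails(candidate):
                case = candidate
                changed = True
                break
    return case

print(simplify([1, 7, 3, 9, 5]))  # prints [7, 9]
```

The result is a much smaller test case that still triggers the failure, which makes the subsequent debugging far easier.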

19.5 Review Techniques

Review techniques are methods used to detect defects in systems and products. The review process maintains the quality of the product by reviewing its deliverables during development. Review is applied to software project deliverables such as plans, analysis and design specifications, source code, and test suites. Review techniques are essential in system development, since a single undetected error or defect during the software development process could have disastrous consequences during business operation. Review techniques and technologies provide an understanding of the critical factors affecting software review performance and give practical guidelines for software reviews. Software reviews aim to detect and classify defects, bugs, errors, and non-conformities with standards and expected outcomes.

It is a common practice in most engineering fields that projects should be reviewed by somebody else besides the developers of the project.

Reviews can be informal (walkthroughs) or formal (with well-defined steps and roles). Software inspection, for instance, is a formal review technique carried out by several peers with specific roles, and it is becoming widespread in the software industry.

Peer review is a common practice in which developers review each other's software code before releasing the software. Peer review identifies bugs, encourages collaboration, and keeps code more maintainable.

Software review is used in software engineering and it is defined as a non-execution-based technique for reviewing software products for defects, and deviations from development standards.

Objectives of software review include the following:

The software review process is used for detecting and eliminating defects.

Advantages of software review include the following:

Software review is considered among the most cost-effective techniques for cost saving, quality improvement, and productivity improvement in software engineering.

The earlier defects are detected in development, the easier and less costly they are to remove and correct.

Software review can detect defects early in the software development life cycle that would be difficult or impossible to detect in later stages.

Software review improves the learning and communication in the software team, since software development is essentially a human activity.

The IEEE Computer Society’s standard for software reviews describes a review process applicable to software systems. It considers seven steps [IEEE 1028 standard, 1998]:

Introduction: Identifies the purpose of the review and presents a general overview of the review processes.

Responsibilities: Establishes the individual roles/responsibilities that are required in order to carry out the review.

Inputs: Describes the requirements for inputs needed by the review.

Entry Criteria: This illustrates the state in which the system has to be in before the review is carried out.

Procedures: This can be considered as a plan of the review, what is needed to be done before, during, and after the review takes place.

Exit Criteria: This illustrates the state in which the system has to be in before the review is concluded.

Output: Identifies the minimum level of output produced by the review.

Summary

In this lecture we presented software testing, test cases, debugging, and review techniques: first test cases, then debugging, and finally review techniques.


