The History Of Software Reliability


02 Nov 2017


Advanced Topics in Reliability

Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also an important factor affecting system reliability. It differs from hardware reliability in that it reflects design perfection, rather than manufacturing perfection. The high complexity of software is the major contributor to Software Reliability problems. Software Reliability is not a function of time, although researchers have developed models relating the two. Software Reliability modeling has matured to the point of practical use, but before applying a model we must carefully select the one that best suits our case. Measurement in software is still in its infancy: no good quantitative methods have been developed to represent Software Reliability without excessive limitations. Various approaches can be used to improve the reliability of software; however, it is hard to balance development time and budget with software reliability. [1]

1.1 Software

Software is any set of machine-readable instructions (most often in the form of a computer program) that directs a computer's processor to perform specific operations. Software is a general term. It can refer to all computer instructions in general or to any specific set of computer instructions. [2]

On most computer platforms, software can be grouped into two broad categories: System software is the basic software needed for a computer to operate (most notably the operating system). Application software is all the software that uses the computer system to perform useful work beyond the operation of the computer itself.

Software refers to one or more computer programs and data held in the storage of the computer. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system. Program software performs the function of the program it implements, either by directly providing instructions to the digital electronics or by serving as input to another piece of software. Software is also sometimes used in a more narrow sense, meaning application software only. Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records.[3]

1.2 Reliability

Reliability is the probability that an item will perform its specified mission satisfactorily for the stated time when used according to the specified conditions.[4]

Reliability engineering is an engineering field that deals with the study, evaluation, and life-cycle management of reliability: the ability of a system or component to perform its required functions under stated conditions for a specified period of time. Reliability engineering is a sub-discipline within systems engineering. Reliability may be quantified in terms of the probability of failure, the frequency of failures, or availability, a probability derived from reliability and maintainability. Maintainability and maintenance may be defined as a part of reliability engineering.

Reliability engineering for complex systems requires a different, more elaborate systems approach than for non-complex systems. Reliability analysis has important links with functional analysis, requirements specification, hardware & software design, manufacturing, testing, maintenance, transport, storage, spare parts, operations research, human factors, technical documentation and work skill training. Effective reliability engineering requires experience, broad engineering skills and knowledge from many different fields of engineering.[5]

1.3 Software Reliability

Software reliability is a special aspect of reliability engineering. System reliability, by definition, includes all parts of the system, including hardware, software, supporting infrastructure (including critical external interfaces), operators and procedures. Traditionally, reliability engineering focuses on critical hardware parts of the system. Since the widespread use of digital integrated circuit technology, software has become an increasingly critical part of most electronics and, hence, nearly all present day systems.[5]

There are significant differences, however, in how software and hardware behave. Most hardware unreliability is the result of a component or material failure that results in the system not performing its intended function. Repairing or replacing the hardware component restores the system to its original operating state. However, software does not fail in the same sense that hardware fails. Instead, software unreliability is the result of unanticipated results of software operations. Even relatively small software programs can have astronomically large combinations of inputs and states that are infeasible to exhaustively test. Restoring software to its original state only works until the same combination of inputs and states results in the same unintended result. Software reliability engineering must take this into account.

Despite this difference in the source of failure between software and hardware, several software reliability models based on statistics have been proposed to quantify what we experience with software: the longer software is run, the higher the probability that it will eventually be used in an untested manner and exhibit a latent defect that results in a failure.[5]

As with hardware, software reliability depends on good requirements, design and implementation. Software reliability engineering relies heavily on a disciplined software engineering process to anticipate and design against unintended consequences. There is more overlap between software quality engineering and software reliability engineering than between hardware quality and reliability. A good software development plan is a key aspect of the software reliability program. The software development plan describes the design and coding standards, peer reviews, unit tests, configuration management, software metrics and software models to be used during software development.[5]

2 Historical Background

Modern complex computer-controlled systems frequently fail due to latent software design errors encountered as the software processes various input combinations during operation. Probabilistic models for such errors and their frequency of occurrence lead to software reliability functions and mean-time-between-software-error metrics. Progress in this field has been made since the 1970s, focusing on the successes achieved with existing models. Future progress is seen as depending heavily on the establishment of a database of software reliability information. This is necessary so that early, more accurate, and widespread use can be made of the proven prediction models which now exist. [6]

3. Attributes of Software Reliability

Software reliability is centered on the attribute: reliability. Software reliability is defined as the probability of failure-free software operation for a specified period of time in a specified environment (according to ANSI, 1991). We notice the three major ingredients in the definition of software reliability: failure, time, and operational environment. We now define these terms and other related software reliability terminology. [1][7]

Software system

A software system is an interacting set of software subsystems embedded in a computing environment that provides inputs to the software system and accepts service (output) from it. A software subsystem may consist of many modules or programs.

Software Service

The expected service, or behaviour, of a software system is a time-dependent sequence of outputs that agrees with the initial specification from which the software implementation was derived.

Software Failure

A failure occurs when the user perceives that a program ceases to deliver the expected service. Failures normally occur during testing and operation.

Failure Severity Level

A failure may be classified into different severity levels (major, medium, or minor) depending on its impact on the system service.

Outage

An outage is a special case of failure, defined as a loss or degradation of service to a customer for a period of time (called the outage duration). In general, an outage can be caused by hardware or software failures, human error, or environmental factors (lightning, power failure, fire, etc.). A failure resulting in the loss of the entire system is called a system outage.

Software Fault or Bug

A software fault, or bug, is an internal defect in a software program that may cause a failure of the software. Most bugs are detected and removed during testing.

Execution Time

Reliability quantities are defined in terms of time; in SRE (software reliability engineering) studies, CPU execution time is used instead of calendar time or clock time.

Failure Rate Function

When a time basis is determined, failures can be expressed in several ways: the cumulative failure function, the failure intensity function, the failure rate function, and the mean-time-to-failure function. The cumulative failure function (also called the mean-value function) denotes the expected cumulative number of failures up to each point in time. The failure intensity function is the rate of change of the cumulative failure function. The failure rate function (also called the rate of occurrence of failures) is the probability that a failure occurs per unit time in the interval (t, t + ∆t], given that no failure has occurred before t.
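The relationship between the mean-value function and the failure intensity can be sketched numerically. The example below uses an exponential (Goel-Okumoto style) mean-value function; the parameter values a and b are hypothetical, chosen only for illustration:

```python
import math

def mu(t, a=100.0, b=0.05):
    """Cumulative failure (mean-value) function: expected failures by time t.
    Exponential form mu(t) = a * (1 - exp(-b*t)); a, b are made-up values."""
    return a * (1.0 - math.exp(-b * t))

def intensity(t, a=100.0, b=0.05):
    """Failure intensity: the derivative of mu(t) with respect to t."""
    return a * b * math.exp(-b * t)

# The intensity matches the numerical rate of change of mu(t):
t, dt = 10.0, 1e-6
print(round(intensity(t), 4), round((mu(t + dt) - mu(t)) / dt, 4))
```

Note how the intensity decreases over time as the expected cumulative failures flatten out, which is the behaviour such models assume for software under debugging.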

MTTF

The MTTF (Mean Time To Failure) function represents the expected time until the next failure occurs.

MTTR

MTTR (Mean Time To Repair) represents the expected time to restore a system to service after a failure.

Availability

Availability is the probability that a system is available when needed. In steady state it can be expressed as:

Availability = MTTF / (MTTF + MTTR)
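The conventional steady-state relation, Availability = MTTF / (MTTF + MTTR), can be sketched in a few lines; the numbers below are hypothetical:

```python
def availability(mttf_hours, mttr_hours):
    """Steady-state availability: the long-run fraction of time the
    system is up, computed as MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

# A system that runs 999 hours between failures and takes 1 hour to repair:
print(availability(999.0, 1.0))  # 0.999
```

This is the familiar "three nines" availability; each extra nine requires either a much longer MTTF or a much shorter MTTR.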

Operational Profile

The operational profile of a system is defined as the set of operations that the software can execute along with the probability with which they will occur.

Failure Data Collection

Two types of failure data can be collected for the purpose of reliability measurement.

i) Failure Count data, that is the number of failures detected per unit of time.

ii) Time between failure data i.e., the interval between two consecutive failures.
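The two forms of failure data are interconvertible; a small sketch with hypothetical timestamps:

```python
def counts_to_intervals(failure_times):
    """Time-between-failures data from cumulative failure timestamps."""
    return [t2 - t1 for t1, t2 in zip([0.0] + failure_times, failure_times)]

def intervals_to_counts(intervals, window):
    """Failure-count data: number of failures per window of time."""
    times, t = [], 0.0
    for gap in intervals:          # rebuild cumulative timestamps
        t += gap
        times.append(t)
    counts = [0] * (int(times[-1] // window) + 1)
    for t in times:                # bucket each failure into its window
        counts[int(t // window)] += 1
    return counts

gaps = counts_to_intervals([2.0, 5.0, 6.5, 11.0])
print(gaps)                          # [2.0, 3.0, 1.5, 4.5]
print(intervals_to_counts(gaps, 5.0))
```

Which form is collected in practice depends on the logging granularity available during test and operation.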

Reliability Estimation

This activity determines current software reliability by applying statistical inference techniques to failure data obtained during system test or during system operation.

Reliability Prediction

This activity predicts future software reliability based upon available software metrics and measures.

Failures

A failure occurs when the user perceives that a software program ceases to deliver the expected service. The user may choose to identify several severity levels of failures, such as catastrophic, major, and minor, depending on their impacts to the system service and the consequences that the loss of a particular service can cause, such as dollar cost, human life, and property damage.

Faults

A fault is uncovered when either a failure of the program occurs, or an internal error (e.g., an incorrect state) is detected within the program. The cause of the failure or the internal error is said to be a fault. In most cases the fault can be identified and removed; in other cases it remains a hypothesis that cannot be adequately verified (e.g., timing faults in distributed systems).

In summary, a software failure is an incorrect result with respect to the specification, or unexpected software behaviour perceived by the user at the boundary of the software system, while a software fault is the identified or hypothesized cause of the software failure.

Defects

When the distinction between fault and failure is not critical, "defect" can be used as a generic term to refer to either a fault (cause) or a failure (effect).

Errors

The term "error" has two different meanings. The first is a discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. Errors in this sense occur when some part of the computer software produces an undesired state. Examples include exceptional conditions raised by the activation of existing software faults, and an incorrect computer state due to unexpected external interference.

The second is a human action that results in software containing a fault. Examples include omission or misinterpretation of user requirements in a software specification, and incorrect translation or omission of a requirement in the design specification. However, this is not a preferred usage, and the term "mistake" is used instead to avoid the confusion.

Time

Reliability quantities are defined with respect to time, although it is possible to define them with respect to other bases, such as program runs or number of transactions.

Operational Profile

The operational profile of a system is defined as the set of operations that the software can execute along with the probability with which they will occur. An operation is a group of runs that typically involve similar processing.
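An operational profile can drive reliability testing by drawing operations according to their occurrence probabilities. In the sketch below, the operation names and probabilities are hypothetical:

```python
import random

# Hypothetical operational profile: operation name -> occurrence probability.
profile = {"query": 0.6, "update": 0.3, "report": 0.1}

def next_operation(rng):
    """Draw the next operation to exercise, weighted by the profile."""
    ops = list(profile.keys())
    weights = list(profile.values())
    return rng.choices(ops, weights=weights, k=1)[0]

rng = random.Random(42)  # fixed seed so the run is reproducible
sample = [next_operation(rng) for _ in range(1000)]
print(sample.count("query") / 1000)  # roughly 0.6
```

Testing according to the profile means the most frequently executed operations receive the most testing, which is the core idea behind operational-profile-driven reliability testing.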

Software Failure Mechanisms

Software failures may be due to errors, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software or other unforeseen problems.[7].

While it is tempting to draw an analogy between Software Reliability and Hardware Reliability, software and hardware have basic differences that give them different failure mechanisms. Hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct. Design faults are closely related to fuzzy human factors and to the design process, of which we do not have a solid understanding. In hardware, design faults may also exist, but physical faults usually dominate. In software, we can hardly find a strict counterpart to the hardware manufacturing process, unless the simple action of uploading software modules into place counts. Therefore, the quality of software does not change once it is uploaded into storage and starts running. Trying to achieve higher reliability by simply duplicating the same software modules will not work, because design faults cannot be masked off by voting.

A partial list of the distinct characteristics of software compared to hardware is listed below [7]

Failure Cause

Software failures are mainly due to design defects.

Wear-Out

Software does not have an energy-related wear-out phase; errors can occur without warning.

Repairable System Concept

Periodic restarts can help clear software problems.

Time Dependency and Life Cycle

Software reliability is not a function of operational time.

Environmental Factors

Environmental factors do not affect software reliability, except insofar as they might affect program inputs.

Reliability Prediction

Software reliability cannot be predicted from any physical basis, since it depends completely on human factors in design.

Redundancy

Redundancy cannot improve Software reliability if identical software components are used.

Interfaces

Software interfaces are purely conceptual, rather than visual or physical.

Failure Rate Motivators

Failure rate motivators are usually not predictable from analyses of separate statements.

Built with standard components

Well-understood and extensively tested standard parts help improve maintainability and reliability, but this trend has not been observed in the software industry. Code reuse has been around for some time, but only to a very limited extent. Strictly speaking, there are no standard parts for software, except some standardized logic structures.


The Bathtub Curve for Software Reliability

Over time, hardware exhibits the failure characteristics shown in the following figure, known as the bathtub curve. Periods A, B, and C stand for the burn-in phase, the useful-life phase, and the end-of-life phase. A detailed discussion of the curve can be found in the topic Traditional Reliability. [1]

 

http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/Image1.gif

Bathtub Curve for Hardware Reliability

Software reliability, however, does not show the same characteristics as hardware. A possible curve is shown in the following figure, obtained by projecting software reliability onto the same axes.[7][1]

There are two major differences between hardware and software curves.

One difference is that in the last phase, software does not have an increasing failure rate as hardware does. In this phase, software is approaching obsolescence; there is no motivation for any upgrades or changes to the software. Therefore, the failure rate will not change.

The second difference is that in the useful-life phase, software will experience a drastic increase in failure rate each time an upgrade is made. The failure rate levels off gradually, partly because of the defects found and fixed after the upgrades.

 

http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/Image2.gif

Revised Bathtub Curve for Software Reliability

The upgrades in the second figure imply featured upgrades, not upgrades for reliability. For feature upgrades, the complexity of software is likely to be increased, since the functionality of software is enhanced. Even bug fixes may be a reason for more software failures, if the bug fix induces other defects into software. For reliability upgrades, it is possible to incur a drop in software failure rate, if the goal of the upgrade is enhancing software reliability, such as a redesign or reimplementation of some modules using better engineering approaches, such as clean-room method. [1]

Supporting evidence can be found in the results of the Ballista project, which performed robustness testing of off-the-shelf software components. The following figure shows the testing results for fifteen POSIX-compliant operating systems. From the graph we see that for QNX and HP-UX, the robustness failure rate increases after an upgrade, but for SunOS, IRIX and Digital UNIX, the robustness failure rate drops as the version numbers go up. Since software robustness is one aspect of software reliability, this result indicates that the upgrades of the latter systems should have incorporated reliability improvements.[1]

http://www.ece.cmu.edu/~koopman/des_s99/sw_reliability/Figure3.GIF

Software Reliability Models

Software reliability models have proliferated as people try to understand the characteristics of how and why software fails, and try to quantify software reliability. Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved.[1]

Despite the many models available, and the many more emerging, none can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for a certain set of software, but may be completely off track for other kinds of problems.

Most software models contain the following parts:[7]

i) Assumptions,

ii) Factors, and

iii) Mathematical function that relates the reliability with the factors.

The mathematical function is usually a higher-order exponential or logarithmic function.
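As a concrete illustration of this three-part structure, the sketch below uses the Jelinski-Moranda model, one of the earliest estimation models. Its assumptions: N initial faults, each contributing an equal hazard phi, with exactly one fault removed per fix. The factors are N and phi; the function relates them to the failure rate. The parameter values here are hypothetical, not fitted to any real data set:

```python
# Jelinski-Moranda sketch: N initial faults, each with equal hazard phi;
# fixing a fault removes its contribution.  N and phi are made-up values.
N, phi = 30, 0.01

def hazard(i):
    """Failure rate in effect between failure i-1 and failure i."""
    return phi * (N - i + 1)

def expected_gap(i):
    """Mean time between failure i-1 and failure i (exponential gaps)."""
    return 1.0 / hazard(i)

# As faults are removed, the failure rate falls and the gaps stretch out:
print(round(expected_gap(1), 2), round(expected_gap(N), 2))
```

In a real application, N and phi would be estimated from observed inter-failure times (typically by maximum likelihood), which is exactly the step that makes this an estimation model.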

Software modeling techniques can be divided into two subcategories: [1]

A) Prediction modeling and

B) Estimation modeling. 

Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing with statistical inference.

The major differences between the two kinds of models are summarized below. [1]

Data reference: prediction models use historical data; estimation models use data from the current software development effort.

When used in the development cycle: predictions are usually made prior to the development or test phases, and can be made as early as the concept phase; estimates are usually made later in the life cycle, after some failure data have been collected, and are not typically used in the concept or development phases.

Time frame: prediction models predict reliability at some future time; estimation models estimate reliability at either the present or some future time.

Difference between software reliability prediction models and software reliability estimation models

Representative prediction models include Musa's Execution Time Model, Putnam's Model, and the Rome Laboratory models TR-92-51 and TR-92-15. Using prediction models, software reliability can be predicted early in the development phase, and enhancements can be initiated to improve reliability.[1]

Representative estimation models include exponential distribution models, the Weibull distribution model, and Thompson and Chelson's model. Exponential models and the Weibull distribution model are usually classified as classical fault count/fault rate estimation models, while Thompson and Chelson's model belongs to the Bayesian fault rate estimation models.[1]

The field has matured to the point that software reliability models can be applied in practical situations and give meaningful results; however, no single model is best in all situations. [7]

Because of the complexity of software, any model has to have extra assumptions. Only limited factors can be put into consideration. Most software reliability models ignore the software development process and focus on the results -- the observed faults and/or failures. By doing so, complexity is reduced and abstraction is achieved, however, the models tend to specialize to be applied to only a portion of the situations and a certain class of the problems. We have to carefully choose the right model that suits our specific case. Furthermore, the modeling results cannot be blindly believed and applied.[1]

Software Reliability Metrics

Measurement is commonplace in other engineering fields, but not in software engineering. Though frustrating, the quest to quantify software reliability has never ceased. Even now, we still have no good way of measuring software reliability. [1]

Measuring software reliability remains a difficult problem because we do not have a good understanding of the nature of software. There is no clear definition of which aspects are related to software reliability, and we cannot find a suitable way to measure most of them. Even the most obvious product metrics, such as software size, have no uniform definition.

It is tempting to measure something related to reliability to reflect the characteristics, if we cannot measure reliability directly.

The current practices of software reliability measurement can be divided into four categories:[7] [1]

1. Product Metrics

Software size is thought to be reflective of complexity, development effort and reliability. Lines Of Code (LOC), or Lines Of Code in thousands (KLOC), is an intuitive initial approach to measuring software size, but there is no standard way of counting. Typically, source code is used (SLOC, KSLOC), and comments and other non-executable statements are not counted. This method cannot faithfully compare software written in different languages. The advent of code reuse and code generation techniques also casts doubt on this simple method.
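A minimal SLOC counter along these lines might look as follows. It handles only Python-style '#' comments, so it is a sketch rather than a general tool; real counters must handle each language's comment and string syntax:

```python
def count_sloc(source):
    """Count source lines of code, skipping blank lines and lines that
    are entirely comments (Python-style '#' comments only)."""
    sloc = 0
    for line in source.splitlines():
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            sloc += 1
    return sloc

example = """# setup
x = 1

y = x + 1  # trailing comments still count as code
"""
print(count_sloc(example))  # 2
```

Even this tiny example shows a counting-convention choice (lines with trailing comments count as code), which is exactly why LOC figures from different tools are hard to compare.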

The function point metric is a method of measuring the functionality of a proposed software development based upon a count of inputs, outputs, master files, inquiries, and interfaces. The method can be used to estimate the size of a software system as soon as these functions can be identified. It is a measure of the functional complexity of the program. It measures the functionality delivered to the user and is independent of the programming language. It is used primarily for business systems; it is not proven in scientific or real-time applications.

Complexity is directly related to software reliability, so representing complexity is important. Complexity-oriented metrics determine the complexity of a program's control structure by simplifying the code into a graphical representation. A representative metric is McCabe's cyclomatic complexity.
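A simplified sketch of computing McCabe-style cyclomatic complexity for Python source: one plus the number of decision points. Real tools also count constructs such as try/except handlers and comprehensions, so treat this as an approximation:

```python
import ast

def cyclomatic_complexity(source):
    """Approximate McCabe complexity: 1 + number of decision points
    (if/for/while statements and boolean operators)."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While)):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two decision points
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(x):
    if x < 0 and x != -1:
        return "neg"
    for i in range(3):
        if i == x:
            return "small"
    return "other"
"""
print(cyclomatic_complexity(code))
```

Functions scoring above roughly 10 on this metric are conventionally flagged as candidates for simplification or extra testing.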

Test coverage metrics are a way of estimating fault and reliability by performing tests on software products, based on the assumption that software reliability is a function of the portion of software that has been successfully verified or tested. Detailed discussion about various software testing methods can be found in topic Software Testing.

2. Project Management Metrics

Researchers have realized that good management can result in better products. Research has demonstrated that a relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using better development process, risk management process, configuration management process, etc.

3. Process Metrics

Based on the assumption that the quality of the product is a direct function of the process, process metrics can be used to estimate, monitor and improve the reliability and quality of software. ISO-9000 certification, or "quality management standards", is the generic reference for a family of standards developed by the International Standards Organization (ISO).

4. Fault and Failure Metrics

The goal of collecting fault and failure metrics is to be able to determine when the software is approaching failure-free execution. Minimally, both the number of faults found during testing (i.e., before delivery) and the failures (or other problems) reported by users after delivery are collected, summarized and analyzed to achieve this goal. The test strategy strongly affects the effectiveness of fault metrics, because if the testing scenarios do not cover the full functionality of the software, the software may pass all tests and yet be prone to failure once delivered. Usually, failure metrics are based upon customer information regarding failures found after release of the software. The failure data collected are then used to calculate failure density, Mean Time Between Failures (MTBF) or other parameters to measure or predict software reliability.
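A minimal sketch of computing MTBF from a post-release failure log; the timestamps below are hypothetical:

```python
def mtbf(failure_times_hours):
    """Mean Time Between Failures from sorted failure timestamps (hours)."""
    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical failure log, in hours since deployment:
log = [120.0, 310.0, 480.0, 700.0]
print(round(mtbf(log), 1))  # 193.3
```

A rising MTBF across successive releases is the trend such metrics are meant to demonstrate.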

Software Reliability Improvement Techniques 

Good engineering methods can largely improve software reliability.

Before the deployment of software products, testing, verification and validation are necessary steps. Software testing is heavily used to trigger, locate and remove software defects. Software testing is still in its infancy; testing is crafted to suit specific needs in various software development projects in an ad-hoc manner. Various analysis tools such as trend analysis, fault-tree analysis, Orthogonal Defect Classification and formal methods can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability.

After deployment of the software product, field data can be gathered and analyzed to study the behaviour of software defects. Fault tolerance and fault/failure forecasting techniques can then help minimize fault occurrence or the impact of faults on the system.


Relationship to Other Topics

Software Reliability is a part of software quality. It relates to many areas where software quality is concerned.

1 Traditional/Hardware Reliability

The initial quest in software reliability study is based on an analogy of traditional and hardware reliability. Many of the concepts and analytical methods that are used in traditional reliability can be used to assess and improve software reliability too. However, software reliability focuses on design perfection rather than manufacturing perfection, as traditional/hardware reliability does.

2 Software Fault Tolerance

Software fault tolerance is a necessary part of a system with high reliability. It is a way of handling unknown and unpredictable software (and hardware) failures (faults) [7] by providing a set of functionally equivalent software modules developed by diverse and independent production teams. The assumption is the design diversity of software, which itself is difficult to achieve.

3 Software Testing

Software testing serves as a way to measure and improve software reliability. It plays an important role in the design, implementation, validation and release phases. It is not a mature field. Advance in this field will have great impact on software industry.

4 Social & Legal Concerns

As software permeates every corner of our daily life, software-related problems and the quality of software products can cause serious harm, as in the Therac-25 accident. The defects in software are significantly different from those in hardware and other components of the system: they are usually design defects, and many of them are related to problems in the specification. The infeasibility of completely testing a software module complicates the problem, because bug-free software cannot be guaranteed for a moderately complex piece of software. No matter how hard we try, a defect-free software product cannot be achieved. Losses caused by software defects raise more and more social and legal concerns. Guaranteeing no known bugs is certainly not a good-enough approach to the problem.


Software Reliability Testing 

Software reliability testing is a field of testing which deals with checking the ability of software to function under given environmental conditions for a particular amount of time, taking into account the precision of the software. In software reliability testing, problems are discovered regarding software design and functionality and assurance is given that the system meets all requirements.[8]

Objectives of Reliability Testing

The main objective of the reliability testing is to test software performance under given conditions without any type of corrective measure using known fixed procedures considering its specifications.[8]

Purpose

To find the perceived pattern of repeating failures.

To find the number of failures occurring in a specified amount of time.

To find the mean life of the software.

To discover the main cause of failure.

Checking the performance of different units of software after taking preventive actions.

Objectives

Behaviour of the software should be defined in given conditions.

The objective should be feasible.

Time constraints should be provided.

Importance of Reliability Testing

The application of computer software has crossed into many different fields, with software being an essential part of industrial, commercial and military systems. Because of its many applications in safety-critical systems, software reliability is now an important research area. Although software engineering is among the fastest developing technologies of the last century, there is no complete, scientific, quantitative measure to assess it. Software reliability testing is being used as a tool to help assess these software engineering technologies.

To improve the performance of software product and software development process, a thorough assessment of reliability is required. Testing software reliability is important as it is of great use for software managers and practitioners.[8]

Types of Reliability Testing

Software Reliability Testing requires checking features provided by the software, the load that software can handle, and regression testing.[8]

Feature Test

Feature testing checks the features provided by the software and is conducted in the following steps:

Each operation in the software is executed once.

Interaction between operations is minimized, and

Each operation is checked for its proper execution.

The feature test is followed by the load test.

Load Test

This test is conducted to check the performance of the software under maximum workload. Any software performs well up to a certain workload, after which its response time starts to degrade. For example, a web site can be tested to see how many simultaneous users it can support without performance degradation. This testing mainly helps for databases and application servers. Load testing also requires software performance testing, which checks how well the software performs under a given workload.[8]
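The load-test idea above can be sketched with a small harness that drives a service with an increasing number of concurrent users and reports the mean response time. The `handle_request` function here is a hypothetical stand-in that just sleeps for 10 ms; a real test would call the web site or application server under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for a real service call; sleeps to simulate ~10 ms of work."""
    start = time.perf_counter()
    time.sleep(0.01)
    return time.perf_counter() - start

def mean_latency(concurrent_users, requests_per_user=5):
    """Fire requests from each simulated user concurrently and average latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(handle_request)
                   for _ in range(concurrent_users * requests_per_user)]
        latencies = [f.result() for f in futures]
    return sum(latencies) / len(latencies)

for users in (1, 5, 20):
    print(f"{users:>3} users: mean latency {mean_latency(users) * 1000:.1f} ms")
```

Plotting mean latency against the user count reveals the knee of the curve, i.e. the workload beyond which response time starts degrading.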

Regression test

Regression testing is used to check if any new bugs have been introduced through previous bug fixes. Regression testing is conducted after every change or update in the software features. This testing is periodic, depending on the length and features of the software.[8]
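A minimal sketch of the regression idea: record the outputs of a known-good release as a baseline, then re-run the same inputs after every change and flag any drift. The `discount` function and its baseline values are hypothetical examples, not part of any real system:

```python
# Regression-test sketch: compare current outputs against a recorded baseline.
def discount(price, percent):
    """Function under test: apply a percentage discount, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

# Baseline outputs recorded from the last known-good release.
BASELINE = {(100, 10): 90.0, (80, 25): 60.0, (19.99, 0): 19.99}

def run_regression_suite():
    """Return a list of (input, expected, actual) for every regression found."""
    return [(args, expected, discount(*args))
            for args, expected in BASELINE.items()
            if discount(*args) != expected]

print("regressions:", run_regression_suite())  # empty list: no new bugs
```

After a bug fix changes `discount`, re-running `run_regression_suite()` immediately shows whether the fix broke any previously correct behaviour.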

Test Planning

Reliability testing is more costly than other types of testing. Thus, proper management and planning are required. The test plan includes the testing process to be implemented, data about the test environment, the test schedule, test points, etc.[8]

Steps for Planning

Find the main aim of testing.

Identify the testing requirements.

Review existing data and check it against the requirements.

Prioritize the necessary tests.

Use the available time, money and manpower effectively.

Determine the specifications of the test.

Allot responsibilities to the testing teams.

Decide policies for reporting test results.

Maintain control over the testing procedures throughout the testing process.

Problems in Designing Test Cases

Some common problems that occur when designing test cases include:

Test cases may be designed simply by selecting only valid input values for each field in the software. When changes are made in a particular module, the previously selected values may no longer exercise the new features introduced after the older version of the software.

There may be some critical runs in the software which are not handled by any existing test case. Therefore, it is necessary to ensure that all possible types of test cases are considered through careful test case selection.

Reliability Enhancement through Testing

Studies during the development and design of software help to improve the reliability of the product. Reliability testing is essentially performed to eliminate the failure modes of the software. Life testing of the product should always be done after the design is finished, or at least after the complete design is finalized. Failure analysis and design improvement are achieved through the following testing.[8]

Reliability growth testing

This testing is used to check new prototypes of the software, which are initially expected to fail frequently. The causes of failure are detected, and actions are taken to reduce defects. Suppose T is the total accumulated test time for the prototype and n(T) is the number of failures from the start up to time T. The plot of n(T)/T against T on log-log axes is a straight line, called a Duane plot. From it, one can estimate how much reliability will be gained after further cycles of testing and fixing.[9]

\begin{alignat}{5} \ln\left[ \frac {n\left( T\right)} {T}\right] = -\alpha \ln\left( T\right) + b \qquad\qquad \text{Eq. 1} \end{alignat}

Solving Eq. 1 for n(T),

\begin{alignat}{5} n\left( T\right) = KT^{1-\alpha} \qquad\qquad \text{Eq. 2} \end{alignat}

where K is e^b. If alpha in the equation is zero, the failure intensity n(T)/T stays constant and no reliability growth occurs, however long testing continues. For alpha greater than zero, the failure intensity decreases as the cumulative test time T increases, so reliability grows with continued cycles of testing and fixing.
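Because Eq. 1 is linear in ln(T), the parameters can be recovered with an ordinary least-squares fit of ln(n/T) on ln(T). The sketch below demonstrates this on synthetic data generated from Eq. 2 with made-up values K = 5 and alpha = 0.4:

```python
import math

# Synthetic Duane-model data: n(T) = K * T**(1 - alpha), per Eq. 2.
K_true, alpha_true = 5.0, 0.4
T = [10.0, 50.0, 100.0, 500.0, 1000.0]           # cumulative test hours
n = [K_true * t ** (1 - alpha_true) for t in T]  # cumulative failures

# Fit ln(n/T) = -alpha * ln(T) + b by least squares (Eq. 1).
x = [math.log(t) for t in T]
y = [math.log(nt / t) for nt, t in zip(n, T)]
mx, my = sum(x) / len(x), sum(y) / len(y)
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
b = my - slope * mx

alpha, K = -slope, math.exp(b)   # slope is -alpha, intercept is ln(K)
print(alpha, K)                  # recovers 0.4 and 5.0
```

With real failure logs the points scatter around the line, and the fitted alpha quantifies how fast reliability is growing under the test-and-fix process.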

Designing test cases for current release

If we are adding new features to the current version of the software, then test cases for those operations are written differently.

First plan how many new test cases are to be written for current version.

If the new feature is part of any existing feature, then share the test cases of new and existing features among them.

Finally combine all test cases from current version and previous one and record all the results.

There is a predefined rule to calculate the number of new test cases for the software. If N is the probability of occurrence of new operations in the new release of the software, R is the probability of occurrence of reused operations in the current release, and T is the number of all previously used test cases, then

\begin{alignat}{5} \text{New test cases}_{(\text{current release})} = \left( \frac {N} {R}\right) \times T \end{alignat}
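As a direct application of this rule, with illustrative values for N, R and T:

```python
# New-test-case count per the rule above; all three values are made up.
N = 0.3   # probability of occurrence of new operations in the new release
R = 0.6   # probability of occurrence of reused operations in the current release
T = 50    # number of all previously used test cases

new_test_cases = (N / R) * T
print(new_test_cases)  # (0.3 / 0.6) * 50 = 25.0
```

So if new operations occur half as often as reused ones, the new release needs roughly half as many new test cases as were previously used.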

Reliability Evaluation Based On Operational Testing

The method of operational testing is used to assess the reliability of software. Here one checks how the software works in its intended operational environment. The main problem with this type of evaluation is constructing such an operational environment. This kind of simulation is used in industries such as the nuclear and aircraft industries. Predicting future reliability is a part of reliability evaluation.

There are two techniques used for this:

Steady State Reliability Estimation 

In this case, we use feedback from delivered software products. Based on those results, we can predict the future reliability of the next version of the product. This is similar to sample testing for physical products.
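A minimal sketch of steady-state estimation, assuming a constant failure rate (exponential model) and illustrative field data from the delivered release:

```python
import math

# Field feedback from deployed copies of the delivered release (made-up data).
operating_hours = 20000.0   # total operating hours logged across all copies
failures = 4                # failures reported over that period

lam = failures / operating_hours       # estimated failures per hour

def reliability(t):
    """Probability of t failure-free hours under the exponential assumption."""
    return math.exp(-lam * t)

print(reliability(100.0))   # predicted chance of 100 failure-free hours
```

The same estimate carried forward serves as the steady-state prediction for the next version, on the assumption that its failure behaviour resembles the delivered one.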

Reliability growth based prediction 

This method uses documentation of the testing procedure. For example, suppose we have developed software and are creating successive new versions of it. We consider the test data for each version and, based on the observed trend, predict the reliability of the new version of the software.

Reliability growth assessment and prediction

In the assessment and prediction of software reliability, we use a reliability growth model. During operation of the software, data about its failures is stored in statistical form and given as input to the reliability growth model. Using this data, the model can evaluate the reliability of the software. Many reliability growth models are available, each a probability model claiming to represent the failure process, but no single model is best suited to all conditions. Therefore, we must choose a model appropriate to the conditions at hand.
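One widely used reliability growth model is the Goel-Okumoto NHPP model, in which the expected cumulative number of failures by time t is m(t) = a(1 - e^(-bt)). The sketch below assumes the parameters a and b have already been fitted to the failure data; the values here are illustrative:

```python
import math

a, b = 100.0, 0.05   # a: total expected failures; b: failure detection rate

def m(t):
    """Expected cumulative failures by test time t (Goel-Okumoto mean value)."""
    return a * (1 - math.exp(-b * t))

def reliability(x, t):
    """Probability of no failure in (t, t + x], given testing up to time t."""
    return math.exp(-(m(t + x) - m(t)))

print(m(30.0))                 # failures expected by t = 30
print(reliability(10.0, 30.0)) # chance the next 10 hours are failure-free
```

As t grows, m(t) saturates at a, so the predicted reliability over any fixed horizon improves with continued testing, which is exactly the growth behaviour the model is meant to capture.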

Reliability estimation based on failure-free working

In this case, the reliability of the software is estimated with assumptions like the following:

If a bug is found, it is certain that it will be fixed.

Fixing the bug will not have any effect on the reliability of the software.

Each fix in the software is accurate.
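Under these assumptions, a standard zero-failure demonstration argument (assuming exponentially distributed failure times, an assumption not stated in the source) turns an observed failure-free period into a confidence bound on the failure rate: after T failure-free hours, the rate is at most -ln(1 - C)/T at confidence level C. The numbers below are illustrative:

```python
import math

def failure_rate_upper_bound(t_hours, confidence=0.90):
    """One-sided upper bound on the failure rate after t_hours with no failures,
    assuming exponentially distributed times between failures."""
    return -math.log(1 - confidence) / t_hours

lam_max = failure_rate_upper_bound(1000.0)  # 1000 failure-free hours observed
print(lam_max)       # ~0.0023 failures/hour at 90% confidence
print(1 / lam_max)   # demonstrated MTBF: ~434 hours
```

Doubling the failure-free period halves the bound, which is why failure-free demonstration testing becomes expensive quickly for high-reliability targets.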

+++++++++++++++++++++++++++++++++++++++++++++++

Conclusion

Reliability testing checks the ability of software to function, failure free, under given conditions for a given period of time. Its main instruments are feature, load and regression testing, supported by careful test planning and test case design. Reliability growth models such as the Duane model make it possible to quantify the gains of repeated test-and-fix cycles, while operational testing, growth-based prediction and failure-free demonstration provide ways to evaluate and predict reliability before and after release. Since no single model suits all conditions and defect-free software cannot be guaranteed, reliability testing remains an essential tool for assessing and improving software products.
