The Debugging Stone Age


Software Reliability is the probability of failure-free software operation for a specified period of time in a specified environment. Software Reliability is also a key factor affecting system reliability. It differs from hardware reliability in that it reflects design perfection rather than manufacturing perfection. The high complexity of software is the major contributing factor to the difficulty of Software Reliability. Software Reliability is not a function of time, although researchers have come up with models relating the two. The modeling techniques for Software Reliability are reaching maturity, but before using a technique, a suitable model must be carefully selected that best fits the case at hand. Measurement in software is still in its infancy. No good quantitative methods have been developed to represent Software Reliability without excessive limitations. Various approaches can be used to improve the reliability of software; however, it is hard to balance development time and resources with software reliability.

The past 30 years have seen the formulation of numerous software reliability growth models to predict the reliability and error content of software systems. These models are concerned with forecasting future system reliability from the failure data collected during the testing phase of a software product. A plethora of reliability models has appeared in the literature; however, a broad validation of these models seems to be lacking. The accuracy of these models, when validated using the very few available data sets, varies significantly, and thus, despite the existence of numerous models, none of them can be recommended unconditionally to potential users.

This thesis presents a Log-logistic software reliability growth model, the development of which was primarily motivated by the inadequacy of the existing models to describe the nature of the failure process underlying some of the previously reported as well as new data sets. The layout of the thesis is as follows: Section-2 describes the finite failure NHPP class of software reliability growth models, and offers a new decomposition of the mean value function of the finite failure NHPP models, which enables us to attribute the nature of the failure intensity of the software to the hazard function. Section-3 describes some of the limitations of the existing finite failure NHPP models. Section-4 describes the log-logistic software reliability growth model. Section-5 discusses parameter estimation of the existing finite failure NHPP models as well as the log-logistic model based on times-between-failures data. Section-6 describes the techniques used for software failure data analysis. Section-7 presents the analysis of two failure data sets which led us to the development of the log-logistic model.

The conventional way of predicting software reliability has, since the 1970s, been the use of software reliability growth models. They were developed at a time when software was built using a waterfall process model. This is in line with the fact that most software reliability growth models require a significant amount of failure data to obtain any reliable estimate of the reliability. Software reliability growth models are normally described in the form of an equation with a number of parameters that need to be fitted to the failure data. A key problem is that the curve fitting often means that the parameters can only be estimated very late in testing, and hence their practical value for decision-making is limited.

This is particularly the case when development is done, for instance, using an incremental approach or another short-turnaround approach. An adequate quantity of failure data is simply not available.

The software reliability growth models have primarily been developed for a quite different situation than today's. Thus, it is not a surprise that they are not really suited to today's challenges unless the problems can be circumvented. This thesis addresses some of the possibilities for addressing the problems with software reliability growth models by looking at ways of estimating the parameters in software reliability growth models before entering integration or system testing.

A study of debugging technologies reveals an interesting trend. Most debugging innovations have centered on reducing the dependence on human abilities and communication. Debugging technology has developed through a number of stages:

Phase-1: The Debugging Stone-Age

At the dawn of the computer era it was hard for programmers to get computers to produce output about the programs they ran. Programmers were forced to find different ways to obtain information about the programs they used. They not only had to fix the errors, but they also had to build the tools to find the errors. Devices such as scopes and program-controlled lights were used as an early method of debugging.

Phase-2: The Bronze-Age: Print-statement times

In due course, programmers began to identify bugs by putting print statements inside their programs. By doing this, programmers were able to trace the program path and the values of key variables. The use of print-statements freed programmers from the time-consuming task of building their own debugging tools. This technique is still in general use and is actually well-suited to certain kinds of problems.
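As a minimal illustration (shown here in Python purely for exposition; the technique is language-independent and the routine is hypothetical), a programmer might trace a suspect function like this:

def average(values):
    total = 0
    for i, v in enumerate(values):
        total += v
        # Temporary print statements trace the program path and key variables.
        print(f"step {i}: v={v}, running total={total}")
    print(f"final total={total}, count={len(values)}")
    return total / len(values)

average([3, 5, 10])  # the printed trace shows where values diverge from expectations

Once the fault is found, the print statements are removed again, which is part of what makes the technique time-consuming.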

Phase-3: The Middle Ages: Runtime Debuggers

Although print-statements were an improvement in debugging technique, they still required a significant amount of programmer time and effort. What programmers needed was a tool that could execute one instruction of a program at a time, and display the values of any variable in the program. This would free the programmer from having to decide ahead of time where to put print-statements, since the decision could be made as he stepped through the program. Thus, runtime debuggers were born. In principle, a runtime debugger is nothing more than an automatic print-statement: it allows the programmer to trace the program path and the variables without having to put print-statements in the code.

Today, virtually every compiler on the market comes with a runtime debugger. The debugger is implemented as a switch passed to the compiler during compilation of the program. Very often this switch is called the "-g" switch. The switch tells the compiler to build sufficient information into the executable to enable it to run with the runtime debugger.

The runtime debugger was a huge improvement over print statements, because it allowed the programmer to compile and run with a single compilation, rather than modifying the source and re-compiling as he tried to narrow down the error.

Phase-4: The Present Day: Automatic Debuggers

Runtime debuggers made it easier to identify errors in the program, but they failed to find the cause of the errors. The programmer needed a better tool to locate and correct the software fault.

Software developers discovered that some classes of errors, such as memory corruption and memory leaks, could be found automatically. This was a step forward for debugging techniques, because it automated the process of finding the error. The tool would inform the developer of the error, and his job was simply to fix it.

Automatic debuggers come in several varieties. The simplest ones are just a library of functions that can be linked into a program. As the program executes and these functions are called, the debugger checks for memory corruption. If it finds such a condition, it reports it. The limitation of such a tool is its inability to detect the point in the program where the memory corruption actually occurs. This happens because the debugger does not observe every instruction that the program executes, and it is only able to identify a small number of errors.

The next group of runtime debuggers is based on OCI technology. These tools read the object code produced by compilers and instrument it before the programs are linked. The basic principle of these tools is that they look for processor instructions that are used to access memory. In the object code, any instruction that accesses memory is modified to check for corruption. These tools are more effective than the ones based on library techniques, but they are still not perfect. Because these tools are triggered by memory instructions, they can only identify errors related to memory. They can detect errors in dynamic memory, but they have limited detection capability on the stack and they do not work on static memory. They cannot detect any other kinds of errors, because of the weaknesses in OCI technology. At the object level, a lot of important information about the source code is permanently lost and cannot be used to help locate errors. Another drawback of these tools is that they cannot detect where memory leaks occur. Pointers and integers are indistinguishable at the object level, making the cause of a leak untraceable.

The third group of runtime debuggers is based on SCI technology. The tool reads the source code of the program, analyzes it, and instruments it so that every program instruction is sandwiched between the tool's instructions. Because the tool reads the source code, it can discover errors related to memory as well as other large classes of errors. Moreover, for memory corruption errors, the tool is able to detect errors in all memory segments, including heap, static, and stack memory. The big advantage of these tools is that they can track pointers inside programs, so leaks can be traced to the point where they occurred. This generation of tools is constantly evolving. In addition to looking for memory errors, these tools are able to detect language-specific errors and algorithmic errors. These tools will be the basis for the next step of technological improvement.

All of the present tools have one common drawback. They still require the programmer to go through the extra step of looking for runtime errors after the program is compiled. In a sense, the process hasn't changed much since the Debugging Stone-Age: first you write code, and then you check for errors. This two-stage process still exists, only at a higher level. The process needs to be integrated into one stage.

Definition of Software Reliability

According to ANSI, Software Reliability is defined as: the probability of failure-free software operation for a specified period of time in a specified environment.

Although Software Reliability is defined as a probabilistic function and involves the notion of time, it must be noted that, unlike conventional Hardware Reliability, Software Reliability is not a direct function of time. Electronic and mechanical parts may become "old" and wear out with time and usage, but software will not rust or wear out during its life cycle. Software will not change over time unless intentionally changed or upgraded.

Software Reliability is an important attribute of software quality, together with functionality, usability, performance, serviceability, capability, installability, maintainability, and documentation. Software Reliability is hard to achieve, because the complexity of software tends to be high. While any system with a high degree of complexity, including software, will be hard to bring to a firm level of reliability, system developers tend to push complexity into the software layer, given the rapid growth of system size and the ease of doing so by upgrading the software. For instance, large next-generation aircraft will have over one million source lines of software on-board; next-generation air traffic control systems will contain between one and two million lines; the upcoming International Space Station will have over two million lines on-board and over ten million lines of ground support software; several major life-critical defense systems will have over five million source lines of software. While the complexity of software is inversely related to software reliability, it is directly related to other important factors in software quality, especially functionality, capability, etc. Emphasizing these features will tend to add more complexity to software.

Software failure mechanisms

Software failures may be due to errors, ambiguities, oversights or misinterpretation of the specification that the software is supposed to satisfy, carelessness or incompetence in writing code, inadequate testing, incorrect or unexpected usage of the software, or other unforeseen problems. While it is tempting to draw an analogy between Software Reliability and Hardware Reliability, software and hardware have basic differences that make their failure mechanisms different. Hardware faults are mostly physical faults, while software faults are design faults, which are harder to visualize, classify, detect, and correct. Design faults are closely related to fuzzy human factors and the design process, of which we do not have a solid understanding. In hardware, design faults may also exist, but physical faults usually dominate. In software, it is hard to find a strict counterpart to "manufacturing" as a hardware production process, if the simple action of uploading software modules into place does not count. Therefore, the quality of software will not change once it is uploaded into storage and starts running. Trying to achieve higher reliability by simply duplicating the same software modules will not work, because design faults cannot be masked by voting.

A partial list of the distinct characteristics of software compared to hardware is given below:

Failure cause: Software defects are mainly design defects.

Wear-out: Software does not have an energy-related wear-out phase. Errors can occur without warning.

Repairable system concept: Periodic restarts can help fix software problems.

Time dependency and life cycle: Software reliability is not a function of operational time.

Environmental factors: Do not affect software reliability, except insofar as they might affect program inputs.

Reliability prediction: Software reliability cannot be predicted from any physical basis, since it depends completely on human factors in design.

Redundancy: Cannot improve software reliability if identical software components are used.

Interfaces: Software interfaces are purely conceptual rather than visual.

Failure rate motivators: Usually not predictable from analyses of separate statements.

Built with standard components: Well-understood and extensively tested standard parts will help improve maintainability and reliability. But in the software industry, this trend has not been observed. Code reuse has been around for some time, but to a rather limited extent. Strictly speaking, there are no standard parts for software, except for some standardized logic structures.

The bathtub curve for Software Reliability

Over time, hardware exhibits the failure characteristics shown in Figure 1, known as the bathtub curve. Periods A, B and C stand for the burn-in phase, the useful life phase and the end-of-life phase. A detailed discussion of the curve can be found in the topic Traditional Reliability.

Software reliability, however, does not show the same characteristics as hardware.

There are two major differences between the hardware and software curves. One difference is that in the last phase, software does not have an increasing failure rate as hardware does. In this phase, software is approaching obsolescence; there is no motivation for any upgrades or changes to the software. Therefore, the failure rate will not change. The second difference is that in the useful-life phase, software will experience a drastic increase in failure rate each time an upgrade is made. The failure rate levels off gradually, partly because of the defects found and fixed after the upgrades.

The upgrades in Figure-2 represent feature upgrades, not upgrades for reliability. For feature upgrades, the complexity of software is likely to increase, since the functionality of the software is enhanced. Even bug fixes may be a source of more software failures, if the fix injects other defects into the software. For reliability upgrades, it is possible to achieve a drop in software failure rate, if the goal of the upgrade is enhancing software reliability, such as a redesign or reimplementation of some modules using better engineering approaches, such as the clean-room method.

Confirmation can be found in the results of the Ballista project, which performed robustness testing of off-the-shelf software components. Figure 3 shows the testing results for fifteen POSIX-compliant operating systems. Since software robustness is one aspect of software reliability, this result indicates that the upgrades of the systems shown in Figure 3 should have incorporated reliability improvements.

Available tools, techniques, and metrics

Since Software Reliability is one of the most important aspects of software quality, reliability engineering approaches are practiced in the software field as well. Software Reliability Engineering (SRE) is the quantitative study of the operational behavior of software-based systems with respect to user requirements concerning reliability.

Software Reliability Models

A proliferation of software reliability models has emerged as people try to understand the characteristics of how and why software fails, and attempt to quantify software reliability. Over 200 models have been developed since the early 1970s, but how to quantify software reliability still remains largely unsolved. As many models as there are, and many more emerging, none of them can capture a satisfying amount of the complexity of software; constraints and assumptions have to be made for the quantifying process. Therefore, there is no single model that can be used in all situations. No model is complete or even representative. One model may work well for one set of software, but may be completely off track for other kinds of problems.

Most software reliability models contain the following parts: assumptions, factors, and a mathematical function that relates the reliability to the factors. The mathematical function is usually a higher-order exponential or logarithmic function.
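For illustration (these are two well-known forms from the literature, not the specific models developed later in this thesis), the Goel-Okumoto model uses an exponential mean value function and the Musa-Okumoto model uses a logarithmic one:

\mu(t) = a\,(1 - e^{-bt})  (exponential class)

\mu(t) = \frac{1}{\theta}\,\ln(1 + \lambda_0 \theta t)  (logarithmic Poisson class)

where a is the expected total number of faults, b the per-fault detection rate, \lambda_0 the initial failure intensity, and \theta the failure intensity decay parameter.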

Software modeling techniques can be divided into two subcategories: prediction modeling and estimation modeling. Both kinds of modeling techniques are based on observing and accumulating failure data and analyzing them with statistical inference.

The major differences between the two types of models are shown in the following table:

Data reference
Prediction models: use historical data.
Estimation models: use data from the current software development effort.

When used in the development cycle
Prediction models: usually made prior to the development or test phases; can be used as early as the concept phase.
Estimation models: usually made later in the life cycle (after some data have been collected); not typically used in the concept or development phases.

Time frame
Prediction models: predict reliability at some future time.
Estimation models: estimate reliability at either the present time or some future time.

Difference between prediction models and estimation models

Representative prediction models include Musa's Execution Time Model, Putnam's Model and the Rome Laboratory models TR-92-51 and TR-92-15, etc. Using prediction models, software reliability can be predicted early in the development phase and enhancements can be initiated to improve the reliability.

Representative estimation models include exponential distribution models, the Weibull distribution model, Thompson and Chelson's model, etc. Exponential models and the Weibull distribution model are usually classed as classical fault count/fault rate estimation models, while Thompson and Chelson's model belongs to the Bayesian fault rate estimation models.

The field has matured to the point that software models can be applied in practical situations and give meaningful results and, second, that there is no one model that is best in all situations. Because of the complexity of software, any model has to make additional assumptions. Only limited factors can be taken into consideration. Most software reliability models ignore the software development process and focus on the results, namely the observed faults and/or failures. By doing so, complexity is reduced and abstraction is achieved; however, the models tend to specialize and apply to only a fraction of situations and a certain class of problems. The right models that suit specific cases must be chosen carefully. In addition, the modeling results cannot be blindly believed and applied.

Reliability analysis in the post-development phase

In most software systems, a relatively small subset of software modules contains a disproportionate number of faults. This suggests that the unreliability of the software can be largely attributed to a small number of modules. Identification of the modules which are likely to contribute significantly to the unreliability of the software can help channel testing and verification efforts towards these modules, and improve the reliability of the software in a cost-effective manner. Reliability analysis in the post-development phase thus involves identification of the "fault-prone" modules based on software metrics.

Software metrics represent descriptions of design and code attributes as well as the development process. Software metrics can be broadly classified into two categories, namely, product metrics and process metrics. Product metrics such as the number of lines of code, number of conditional jumps, the cyclomatic complexity metric, Halstead's estimate of program length, and Jensen's estimate of program length measure code characteristics. Product metrics may also include design metrics to measure the characteristics of the design of the application. Process metrics measure the characteristics of the software development process and include reuse, history of corrected faults, and experience of programmers. Several research efforts have sought to develop a predictive relationship between software metrics and the "reliability" of a software module. These techniques can be roughly classified as classification techniques and fault prediction techniques. Classification techniques such as discriminant analysis, factor analysis, and classification trees classify the modules into two classes, namely fault-prone and non fault-prone. The reliability (unreliability) of the module thus depends on whether it belongs to the fault-prone (non fault-prone) class. Fault prediction techniques such as linear and non-linear regression, Alberg diagrams and Poisson analysis seek to predict the number of faults in a software module, which is an indicator of its reliability. Predicting the number of faults in a software module, as opposed to classifying the modules into fault-prone and non fault-prone categories, can enable us to:

Assess the impact of design changes during the design phase,

Evaluate test plans by comparing the planned test coverage with the predicted fault distribution,

Identify the components that are at risk of being under-tested,

Identify additional test cases by comparing the actual number of faults revealed with the predicted faults, and make decisions about when to stop testing, namely when the actual number of faults approaches the predicted numbers. This section gives a brief explanation of the regression tree analysis technique used to predict the number of faults in software modules and presents a data analysis using the regression tree approach; a minimal sketch of the idea follows below.
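The following is a minimal sketch (Python with scikit-learn) of regression-tree-based fault prediction; the metric names and numbers are hypothetical and serve only to show the shape of the analysis:

# Predict the number of faults in a module from its product metrics
# using a regression tree fitted on data from earlier modules.
from sklearn.tree import DecisionTreeRegressor

# Each row: [lines_of_code, cyclomatic_complexity, conditional_jumps]
module_metrics = [
    [120,  4,  10],
    [450, 18,  60],
    [900, 35, 140],
    [200,  7,  22],
    [650, 25,  95],
]
observed_faults = [1, 6, 15, 2, 9]  # faults found in those earlier modules

tree = DecisionTreeRegressor(max_depth=3).fit(module_metrics, observed_faults)

# Predicted fault count for a new module, given its metrics.
print(tree.predict([[500, 20, 70]]))

In practice the model would be trained on a much larger set of modules and validated before being used to direct testing effort.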

Accurate Software Reliability Estimation

A large number of software reliability growth models are now available. It is widely known that none of these models performs well in all situations, and that choosing the proper model a priori is difficult. For this reason, recent work has focused on how these models can be made more accurate, rather than trying to find a model which works in all cases. This includes a range of efforts in data filtering and recalibration, and an examination of the physical interpretation of model parameters. The impact of the parameter estimation technique on model accuracy has been examined, showing that the maximum likelihood method provides estimates which are more reliable than those of the least squares method. An interpretation of the parameters of the popular logarithmic model has been given which shows that it may be possible to use this interpretation to overcome some of the difficulties found when working with early failure test data. A new software reliability model, based on an objective measure of program coverage, is presented to show how it can be used to predict the number of defects in a program.

Increasingly, software represents a critical component not only in scientific and business-related enterprises, but in daily life, where it runs devices such as cars, phones, and television sets. Even though advances have been made towards the production of defect-free software, any software required to operate reliably must still undergo extensive testing and debugging. This can be a costly and time-consuming process, and managers require accurate information about how software reliability grows as a result of this process in order to effectively manage their budgets and projects.

The effects of this process, by which it is hoped software is made more reliable, can be modeled through the use of Software Reliability Growth Models, hereafter referred to as SRGMs. Ideally, these models provide a means of characterizing the growth process and enable software reliability practitioners to make predictions about the likely future reliability of software under development. Such techniques allow managers to accurately allocate time, money, and human resources to a project, and to assess when a piece of software has reached a point where it can be released with some level of confidence in its reliability. Unfortunately, these models are frequently inaccurate.

Numerous SRGMs have been proposed, and some appear to be better overall than others. Unfortunately, models that are good overall are not always the best choice for a particular data set, and it is not possible to know which model to use a priori. Even when an accurate model is used, the predictions made by a model may still be less accurate than desired. For this reason, a great deal of research has gone into trying to make more effective use of existing models. A variety of methods have been proposed, such as adjusting for model bias, combining multiple models, and smoothing of input data. Li and Malaiya found that the choice of such enhancement techniques often makes a bigger difference in model accuracy than the initial choice of model.

This study will examine how existing software reliability growth models can be used more accurately.

It will address gaps in the existing literature, both in terms of model use and meaning, and suggest some techniques which might make software reliability models more accurate. This chapter lays out the basis for the work, the models and data used, and the techniques used to evaluate model usefulness.

Software Reliability Metrics

Measurement is commonplace in other engineering fields, but not in software engineering. Though frustrating, the quest to quantify software reliability has never ceased. To date, however, there is no good way of measuring software reliability.

Measuring software reliability remains a difficult problem because there is no good understanding of the nature of software. There is no clear definition of which aspects are related to software reliability, nor a suitable way to measure them. Even the most obvious product metrics, such as software size, have no uniform definition.

If reliability cannot be measured directly, it is tempting to measure something related to reliability that reflects its characteristics. The current practices of software reliability measurement can be divided into four categories:

Product metrics

Software size is thought to be reflective of complexity, development effort and reliability. Lines of Code (LOC), or LOC in thousands (KLOC), is an intuitive initial approach to measuring software size, but there is no standard way of counting. Typically, source code is used (SLOC, KSLOC) and comments and other non-executable statements are not counted. This method cannot faithfully compare software not written in the same language. The advent of new technologies of code reuse and code generation also casts doubt on this simple method.
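A minimal sketch (Python) of an SLOC count in this spirit, assuming a '#' whole-line comment convention; real counters must also handle block comments, string literals and other language-specific rules:

# Count source lines of code: non-blank lines that are not whole-line comments.
def count_sloc(path, comment_prefix="#"):
    sloc = 0
    with open(path, encoding="utf-8", errors="ignore") as f:
        for line in f:
            stripped = line.strip()
            if stripped and not stripped.startswith(comment_prefix):
                sloc += 1
    return sloc

# Example (hypothetical file name): print(count_sloc("module.py"))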

The function point metric is a method of measuring the functionality of a proposed software development based upon a count of inputs, outputs, master files, inquiries, and interfaces. The method can be used to estimate the size of a software system as soon as these functions can be identified. It is a measure of the functional complexity of the program. It measures the functionality delivered to the user and is independent of the programming language. It is used primarily for business systems; it is not proven in scientific or real-time applications. Complexity is directly related to software reliability, so representing complexity is important. Complexity-oriented metrics determine the complexity of a program's control structure by simplifying the code into a graphical representation. A representative metric is McCabe's Complexity Metric. Test coverage metrics are a way of estimating faults and reliability by performing tests on software products, based on the assumption that software reliability is a function of the portion of software that has been successfully verified or tested. A detailed discussion of various software testing methods can be found in the topic Software Testing.
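As an illustration of a complexity-oriented metric, McCabe's cyclomatic complexity can be computed from the control-flow graph as V(G) = E - N + 2P, where E is the number of edges, N the number of nodes and P the number of connected components. A minimal sketch (Python, with a hypothetical graph for a routine containing a single if/else branch):

def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe: V(G) = E - N + 2P
    return len(edges) - len(nodes) + 2 * components

nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2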

Project management metrics

Researchers have realized that good management can result in better products. Research has demonstrated that a relationship exists between the development process and the ability to complete projects on time and within the desired quality objectives. Costs increase when developers use inadequate processes. Higher reliability can be achieved by using a better development process, risk management process, configuration management process, etc.

Process metrics

Based on the assumption that the quality of the product is a direct function of the process, process metrics can be used to estimate, monitor and improve the reliability and quality of software. ISO-9000 certification, or "quality management standards", is the generic reference for a family of standards developed by the International Standards Organization (ISO).

Fault and failure metrics

The goal of collecting fault and failure metrics is to be able to determine when the software is approaching failure-free execution. Simply put, both the number of faults found during testing (i.e., before delivery) and the failures (or other problems) reported by users after delivery are collected, summarized and analyzed to achieve this goal. Test strategy is highly relevant to the effectiveness of fault metrics, because if the testing scenario does not cover the full functionality of the software, the software may pass all tests and yet be prone to failure once delivered. Usually, failure metrics are based upon customer information regarding failures found after release of the software. The failure data collected are then used to calculate failure density, Mean Time Between Failures (MTBF) or other parameters to measure or predict software reliability.
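A minimal sketch (Python) of how MTBF and failure intensity might be computed from such field data; the failure timestamps are hypothetical:

# Failure times in cumulative operating hours since release.
failure_times = [120.0, 310.0, 455.0, 720.0, 1010.0]

inter_failure_times = [t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])]
mtbf = sum(inter_failure_times) / len(inter_failure_times)
failure_intensity = 1.0 / mtbf  # failures per operating hour

print(f"MTBF = {mtbf:.1f} hours, failure intensity = {failure_intensity:.4f} per hour")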

Software Reliability Improvement Techniques

Good engineering methods can largely improve software reliability. Before deployment of software products, testing, verification and validation are necessary steps. Software testing is heavily used to trigger, locate and remove software defects. Software testing is still in its infancy; testing is often crafted to suit specific needs in various software development projects in an ad-hoc manner. Various analysis tools such as trend analysis, fault-tree analysis, Orthogonal Defect Classification and formal methods can also be used to minimize the possibility of defect occurrence after release and therefore improve software reliability.

After deployment of the software product, field data can be gathered and analyzed to study the behavior of software defects. Fault tolerance and fault/failure forecasting techniques provide helpful guidelines to minimize fault occurrence or the impact of faults on the system.

Methods to achieve reliable software

Reliable software systems can be achieved by the combined use of a set of methods, which can be classified into the following areas. Fault avoidance: These techniques attempt to prevent the creation of software faults in the first place, by iterative refinement of the user's system requirements, using good software design methods, enforcing structured programming discipline and writing clear code. Formal methods and software reuse have also gained popularity as fault-prevention techniques in the software development community.

Fault removal: Practitioners depend mainly on software testing techniques to remove the faults present in the software. Formal inspection is another rigorous process for finding and correcting faults, and verifying the corrections.

Fault tolerance: Software fault tolerance is concerned with all the techniques necessary to enable a system to tolerate software faults remaining in the system after its development. Design diversity is an established technique to provide tolerance of software faults.

Fault/failure forecasting: This involves establishing the relationship between a failure and the fault that causes it, developing reliability models, collecting failure data, applying the reliability models to the observed data, selecting a model through analysis and interpretation of the results, and predicting a range of metrics of interest such as reliability, expected number of residual faults, and failure intensity, to guide management decisions.

Software development phases

Software process models provide guidance for determining the order of the stages involved in software development and for establishing the transition criteria for progressing from one stage to the next. A number of software process models have been proposed, some of which include: the code-and-fix model, the stagewise model, the waterfall model, the evolutionary development model, the transform model and the spiral model.

The traditional waterfall development model, which has become the basis for most software acquisition standards, consists of the following sequence of phases: requirements specification, design and architecture, implementation/coding, testing and validation, and deployment and maintenance. It should be noted that the use of the waterfall model for the software development process is not recommended; it is included here merely to relate the reliability evaluation techniques to the software development activities that are likely to occur in a software development process. The thesis presents techniques to evaluate the reliability of a software application during all phases of the software development life cycle, starting from the architecture design phase all the way into operation. It is intended for practitioners such as architects, designers and managers who architect and design software applications to achieve the desired reliability targets in a cost-effective manner.

Subsequent to deployment, these techniques can be used to monitor and control the application to aid in achieving rapid reliability growth. In the operational phase, these techniques can be used to assess and improve the reliability of the software application. The layout of the tutorial is as follows:

Section-2 describes reliability assessment techniques that can be employed in the architecture design phase of the life cycle. Section-3 describes reliability assessment techniques that can be employed after development, prior to the testing phase. Section-4 describes the reliability assessment techniques that can be used in the testing phase. Section-5 describes reliability assessment techniques that can be applied in the operational phase of a software application. The following sections present a brief description of a Software Reliability Estimation and Prediction Tool (SREPT) that encapsulates all the techniques described in the earlier sections in a unified framework, and present a failure data analysis example using SREPT.

Reliability analysis in the design phase

Architecture-based analysis, which seeks to characterize the reliability of an application taking into consideration the failure characteristics of its "components" and the "architecture" of the application, can be applied in the architecture/design phase of a software application. The notion of a "component" and "architecture" is well defined only in the case of applications that are assembled from reusable software components using one of the component models such as Microsoft's DCOM/COM, OMG's CORBA component model and Sun Microsystems' JavaBeans and Enterprise JavaBeans. For applications that are built from scratch, or ground up, the notion of a "component" and "architecture" is not very well defined. However, even for applications that are built from scratch, it is common to use a "divide and conquer" strategy in order to master size and complexity. As a result, built-from-scratch applications are very likely to be made up of several interacting parts. The essence of architecture-based analysis is to characterize the reliability of an application in terms of the failure characteristics of its parts and the interactions among the parts. As a result, architecture-based analysis can be applied to both types of applications, namely the ones built from scratch, and the ones assembled using a component-based approach. In this chapter the terms "component" and "architecture" are used in a generic sense, with the former representing a part of an application and the latter representing the interactions among the parts.

A number of analytical models have been proposed to characterize the reliability of an application based on its architecture and the failure characteristics of its components. These models use the control flow graph to represent the software architecture and estimate software reliability analytically. They assume that the transfer of control among the components follows a Markov property. Software architecture can be modeled using a discrete time Markov chain (DTMC), a continuous time Markov chain (CTMC) or a semi-Markov process (SMP). The failure characteristics of a component can be given by its reliability, a constant failure rate or a time-dependent failure rate. A comprehensive overview of the various analytical models arising from combinations of architectural models and failure characteristics of the components is presented.
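A minimal numerical sketch (Python/NumPy) in the spirit of the DTMC-based composite model attributed to Cheung; the component reliabilities and transition probabilities are hypothetical:

import numpy as np

R = np.array([0.999, 0.995, 0.990])   # reliabilities of components 1..3
P = np.array([[0.0, 0.7, 0.3],        # P[i][j]: probability control moves i -> j
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])       # component 3 is the exit component

Q = np.diag(R) @ P                    # probability of a successful transfer i -> j
S = np.linalg.inv(np.eye(len(R)) - Q) # accumulated successful-visit probabilities
system_reliability = S[0, -1] * R[-1] # start in component 1, terminate in component 3

print(round(system_reliability, 5))

The same structure extends to more components; the inputs would come from component testing (reliabilities) and from an operational profile or execution traces (transition probabilities).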

Approaches to achieve Software Reliability

One possible way to address the problem that software reliability growth models require data before they can make stable predictions is to estimate the model parameters by other means. Three different approaches are listed below:

Historical data from previous similar situations, i.e. a software reliability growth model parameter value is taken from a similar project or situation,

In-project estimation, i.e. parameters are estimated by means of information from the present project,

Combined approach, for example building a model for estimating a parameter from historical data and then feeding the model with current data.

As an example, consider the Goel-Okumoto model, which includes two parameters. The Goel-Okumoto model is a simple non-homogeneous Poisson process (NHPP) model with the following mean value function: μ(p) = a(1 − exp(−bp)),

where a is the total number of failures expected to be found as p goes towards infinity.

μ(p) is the expected number of failures found by time p. Finally, b can be viewed as a testing-efficiency parameter: a higher value means that more failures will be found per time unit.

When considering how to estimate the model parameters, a basic understanding and interpretation of the parameters is important. If the Goel-Okumoto model and its parameters are related to the three approaches above, it becomes clear that the total number of failures is highly dependent on the current project, and therefore the first approach above is less suitable for estimating it. On the other hand, the first approach may very well be suitable for estimating the test efficiency parameter, if a similar test approach is applied.
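A minimal sketch (Python/SciPy) of fitting the Goel-Okumoto mean value function to cumulative failure counts; the data are hypothetical, and a simple least-squares fit is used here for brevity even though, as noted elsewhere in this thesis, maximum likelihood estimation is often preferred:

import numpy as np
from scipy.optimize import curve_fit

def go_mean_value(p, a, b):
    # Goel-Okumoto mean value function mu(p) = a * (1 - exp(-b * p)).
    return a * (1.0 - np.exp(-b * p))

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cumulative_failures = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

(a_hat, b_hat), _ = curve_fit(go_mean_value, weeks, cumulative_failures, p0=(50.0, 0.3))
print(f"estimated total failures a = {a_hat:.1f}, test efficiency b = {b_hat:.3f}")
print(f"expected remaining failures = {a_hat - cumulative_failures[-1]:.1f}")

If a (or b) can instead be estimated from historical or in-project data as discussed above, only the remaining parameter needs to be fitted, which stabilizes the fit much earlier in testing.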

At this stage, it is also worth stressing that even if we are only capable of estimating one parameter, it makes the curve fitting considerably easier.

Going back to the number of failures expected to be found, two possible methods have been identified for estimation before integration and system testing: complexity metrics and capture-recapture estimation. Most work so far has been directed towards complexity metrics, although models based on them have also been criticized.

Complexity metrics require that a model is built from previous projects or increments, and then fed with new input to produce an estimate of the total number of failures to be expected.

Capture-recapture has mainly been used in software engineering for software inspections, but some attempts to apply it to testing have also been published. We are currently planning a study on using capture-recapture for typical components in the system together with an industrial partner. The idea is to select a typical component and have a number of testers test the component to get an estimate of the remaining number of defects in the component.

We have chosen to use a typical component, since the company cannot afford to have several testers on each component only for estimation purposes. We are also looking into different ways of using this information for one typical component to scale the estimate to the whole system.
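A minimal sketch (Python) of the simplest two-tester capture-recapture estimate (the Lincoln-Petersen estimator in Chapman's bias-corrected form); the defect counts are hypothetical and more elaborate multi-tester estimators exist:

def capture_recapture_estimate(found_by_a, found_by_b, found_by_both):
    # Chapman's bias-corrected Lincoln-Petersen estimator of the total defect count.
    return (found_by_a + 1) * (found_by_b + 1) / (found_by_both + 1) - 1

total_estimate = capture_recapture_estimate(found_by_a=14, found_by_b=11, found_by_both=7)
defects_found_so_far = 14 + 11 - 7  # union of the two testers' findings
print(f"estimated total defects: {total_estimate:.1f}")
print(f"estimated remaining defects: {total_estimate - defects_found_so_far:.1f}")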

Summary

Software reliability growth models started to be developed in a period when the waterfall model was king (or queen), but they are less useful in modern approaches to software development. Thus, we either have to find completely new ways of capturing the information that is hidden in failure data, or we have to adapt the use of software reliability growth models to current ways of developing software.

This thesis has pointed to some opportunities for applying software reliability growth models, or more precisely to different ways of estimating the software reliability growth model parameters without having to wait until one or more non-linear equations can be solved numerically with a stable solution.

Imperfect Software Debugging Models

An NHPP model is said to have a perfect debugging assumption when a(p) is constant, i.e., no new faults are introduced during the debugging process. An NHPP SRGM subject to imperfect debugging was introduced by the authors [16,17] with the assumption that when detected faults are removed, there is a chance that new faults are introduced at a constant rate γ.

Let a(p) be the number of errors to be eventually detected plus the number of new errors introduced into the system by time p; the following system of differential equations is obtained:

--------------------- (2.1)

and , -------------------------- (2.2)

Solving the above differential equations under the boundary conditions m(0) = 0 and W(0) = 0, the following mean value function (MVF) of the inflection S-shaped model with Log-logistic testing-effort under imperfect debugging is obtained:

---------------------- (2.3)

and

------------------- (2.4)

Thus, the failure intensity function λ(p) is given by

------------------ (2.5)

The expected number of remaining errors after testing time p is

---------------- (2.6)
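As a point of reference, a commonly used imperfect-debugging NHPP formulation with a testing-effort function takes the following form (a minimal sketch, assuming a constant fault-introduction rate \gamma and cumulative testing-effort W(p) with w(p) = dW(p)/dp; the exact inflection S-shaped, Log-logistic testing-effort model above may differ in form):

\frac{dm(p)}{dp} = b\,w(p)\,\bigl[a(p) - m(p)\bigr], \qquad \frac{da(p)}{dp} = \gamma\,\frac{dm(p)}{dp},

which, under m(0) = 0, a(0) = a and W(0) = 0, gives

m(p) = \frac{a}{1-\gamma}\Bigl(1 - e^{-b(1-\gamma)W(p)}\Bigr), \qquad \lambda(p) = \frac{dm(p)}{dp} = a\,b\,w(p)\,e^{-b(1-\gamma)W(p)},

and the expected number of remaining errors after testing time p is a(p) - m(p) = a\,e^{-b(1-\gamma)W(p)}.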

Least Squares Method

The parameters of the Log-logistic TEF can be estimated using the method of least squares (LSE). These parameters are determined for the n observed data pairs (p_k, W_k) (k = 1, 2, ..., n; 0 < p_1 < p_2 < ... < p_n), where W_k is the cumulative testing-effort consumed in the interval (0, p_k]. The estimators, which provide the model with the best fit, can be obtained by minimizing:

------- (2.7)

Differentiating S with respect to the parameters and setting the partial derivatives to zero, the following set of nonlinear equations is obtained:

--- (2.8)

Thus, the LSE of the first parameter is given by

--------------- (2.9)

The LSEs of the remaining two parameters can be obtained numerically by solving the following equations:

------ (2.10)

and

------- (2.11)
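A minimal sketch (Python/SciPy) of a least-squares fit of a Log-logistic testing-effort function; the parameterization W(p) = N (alpha*p)^kappa / (1 + (alpha*p)^kappa) is one common form, not necessarily the exact one used above, and the effort data are hypothetical:

import numpy as np
from scipy.optimize import curve_fit

def log_logistic_effort(p, N, alpha, kappa):
    # Cumulative testing-effort W(p) under a log-logistic curve.
    x = (alpha * p) ** kappa
    return N * x / (1.0 + x)

weeks = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
effort = np.array([4.0, 11.0, 19.0, 26.0, 31.0, 34.0, 36.0, 37.0])  # e.g. CPU hours

(N_hat, alpha_hat, kappa_hat), _ = curve_fit(
    log_logistic_effort, weeks, effort, p0=(40.0, 0.3, 2.0))
print(f"N = {N_hat:.1f}, alpha = {alpha_hat:.3f}, kappa = {kappa_hat:.2f}")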

Maximum Likelihood Method

Once the estimates of the testing-effort parameters are known, the parameters of the SRGM can be estimated using the MLE method. The estimators of a, b, and r are determined for the n observed data pairs (p_k, m_k) (k = 1, 2, ..., n; 0 < p_1 < p_2 < ... < p_n), where m_k is the cumulative number of software errors detected up to time p_k. Then the likelihood function for the unknown parameters a, b, and r in the NHPP model [18] is given by (Musa et al., 1987):

----- (2.12)

where

The maximum likelihood estimates of the SRGM parameters a, b, and r can be obtained by solving the following three equations:

--------------- (2.13)

----------- (2.14)

+ ------------------------------- (2.15)

where k = 1, 2, ..., n, and

Solving the above equations with numerical methods gives the values of a, b, and r.
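A minimal sketch (Python/SciPy) of maximum likelihood estimation for an NHPP model from grouped failure data, using the Goel-Okumoto mean value function for concreteness (the mean value function of the model above may differ); the failure counts are hypothetical:

import numpy as np
from scipy.optimize import minimize

times = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
cum_failures = np.array([12, 21, 28, 33, 37, 40, 42, 43], dtype=float)

def neg_log_likelihood(params):
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    m = a * (1.0 - np.exp(-b * times))                   # mean value function
    dm = np.diff(np.concatenate(([0.0], m)))             # expected failures per interval
    dy = np.diff(np.concatenate(([0.0], cum_failures)))  # observed failures per interval
    # Poisson log-likelihood of the interval counts, with constant terms dropped.
    return np.sum(dm - dy * np.log(dm))

result = minimize(neg_log_likelihood, x0=(50.0, 0.3), method="Nelder-Mead")
a_hat, b_hat = result.x
print(f"MLE estimates: a = {a_hat:.1f}, b = {b_hat:.3f}")

For the Log-logistic model itself, only the mean value function inside neg_log_likelihood would change; the numerical solution of the likelihood equations proceeds in the same way.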


