The Need To Measure Product Quality

1.1 Introduction

Computers and the Internet have revolutionised life in many ways, enabling us to use, transform and share information. At the heart of this revolution is an intangible, man-made artefact we call software.

In the progression from Turing's machine to von Neumann's machine, from machine languages to higher-level programming languages, and through many computing paradigms, the only constants have been higher levels of abstraction, more complex systems, and continual change in use and users.

Humans have written large volumes of software, although the total volume of source code produced is not definitively known. The volume of software data, in both human-readable and machine-readable form, poses a serious storage challenge; however, that challenge can be addressed by our ability to produce ever more capable storage media and storage solutions.

The problem that software engineers and computer scientists face today is not in creating innovative and complex software solutions, but in ensuring that the software is dependable, that its design is optimal, and that the supporting processes are efficient and scalable.

The need to measure product quality, design quality and process properties has resulted in a wide variety of metrics covering structured programming, object-oriented programming, UML design and project processes. A selection of these is used by industry and academia. However, there is a distinct lack of empirical evidence to support or refute the claimed benefits of adoption, and this is compounded by the fact that, despite research and industrial work, few widely accepted technical standards exist.

Software engineering is a human-intensive activity, and software products and services are expensive to produce. Unlike other established fields of engineering, where enforceable commercial and legal structures governing product quality and liability exist, such structures are yet to develop for software engineering. A consequence of this is inadequate motivation for investment in the development of software testing infrastructures.

Software is the transparent enabler of many systems that directly or indirectly affect our daily lives. We live not just in the age of information networks, but also of software networks. Consider, for instance, Internet-enabled online banking, which makes it possible to access bank accounts and associated services from mobile or desktop computing devices without having to walk into a bank. Recently, a large UK-based banking group was unable to provide these services as a result of a software malfunction, which left its customers without access to full banking services either online or at the branch [1]. This resulted in financial losses and disruption of normal operations spanning several weeks.

Such failures are not exceptional; in fact, software failures are more common than is acknowledged, representing economic losses to the tune of 50-60 billion US dollars annually according to a NIST report on the subject [2].

All man-made systems are prone to defects and failures. Are software systems prone to failures that are more severe? We do not know the definitive answer, but the impact of system failures is most severe on systems that share highly coupled relationships. If one were to hazard a guess, it is quite likely that software failures have a higher than normal impact on our daily lives, almost comparable to the loss of a public utility such as water, electricity or gas.

In the last two decades the emergence of the Internet, innovations in digital storage technology and further advances in computer processor technology have resulted in huge growth in the volume of data being produced, processed and consumed. Software systems have evolved not only in size but in complexity of composition. Whilst software engineering tools, techniques and technology have evolved, there is significant concern that the pace of this evolution has not been sufficient to address the engineering challenges that emergent, large and ultra-large, highly interconnected software systems present [3].

Real-world natural, physical and man-made systems characterised by such size and complexity are difficult to understand; biochemical systems, socio-technical systems and the Internet are examples of such systems, to name a few. Research on the phenomena of these systems, as a means to gain understanding and support reasoning, has progressed along various parallel approaches with varying degrees of success. One of these approaches is to view such systems in the light of complex systems theory, which has been influenced by general systems theory. The basic premise is that the dynamical behaviour of these systems is greater than the sum of their parts. The dichotomy is that general systems approaches are characterised by holistic rather than reductionist views of analysis. The latter, reductionist approach is predominant in science today and intuitively seems to offer a better chance of success, because the technique followed is essentially divide and conquer.

It is now appreciated that new approaches are needed to model real-world systems more accurately. One of these emergent approaches is complex network modelling, which provides a skeletal framework for representing a real-world system as a network graph of essential system objects and the pair-wise interaction relationships amongst them. This skeletal framework can be used as the basis for analysing dynamic and non-dynamic properties; by overlaying it with temporal information, it is also possible to analyse evolutionary and growth properties. The statistical physics of these network models enables researchers to answer questions such as which system objects are the most important, how robust the modelled system is to failure, and under what conditions the modelled system is likely to lose functional integrity. The network graph itself is a mathematical model, and the reasoning is based on empirical models that need to be developed for the systems being modelled. This approach makes it possible to reason about both the global and local properties of a system and provides an avenue for taking a holistic or reductionist view, as appropriate, to support reasoning [4].
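As a concrete illustration of this style of analysis, the short sketch below builds a small interaction graph, ranks its elements by betweenness centrality and probes robustness by removing the top-ranked element. It is a minimal, hypothetical example: the interaction data are invented and the Python networkx library is assumed as the analysis tool.

# Minimal sketch of complex-network analysis: build a graph of system
# objects and their pairwise interactions, rank nodes by centrality,
# and probe robustness by removing the most central node.
# The interaction list is purely illustrative.
import networkx as nx

interactions = [  # hypothetical pairwise interactions between system objects
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("E", "F"),
]

G = nx.Graph()
G.add_edges_from(interactions)

# Betweenness centrality: nodes lying on many shortest paths score highly
# and are candidate structurally important elements.
centrality = nx.betweenness_centrality(G)
ranked = sorted(centrality, key=centrality.get, reverse=True)
print("Most central object:", ranked[0])

# Simple robustness probe: remove the most central node and observe how
# the largest connected component shrinks.
H = G.copy()
H.remove_node(ranked[0])
largest = max(nx.connected_components(H), key=len)
print("Largest component after removal:", len(largest), "of", G.number_of_nodes())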

Software engineering research has in recent times focused on tackling the challenges of software testing, and on testing solutions that are scalable and cost-effective. Cloud services have generated significant industrial and academic research as well as commercial interest from the information technology services industry.

The questions one is naturally inclined to ask are: what benefits can the Cloud provide to testing, is it likely to have a radical impact on how software testing is done in industry, and is such a service suitable for the empirical software engineering experiments carried out in academia? Whether or not the Cloud has a radical effect on software testing is difficult to predict. However, the way we consume software services is changing beyond doubt; large software vendors have committed huge resources and made Cloud services the platform for their future products and services.

The key questions driving software engineering research today concern software testing, the validation of new techniques and scalable testing solutions. Some of these questions are investigated in this work.

1.2 Aim, objectives and contributions

The primary aim of this thesis is to validate network analysis measures of the centrality of a network's structural elements, and to explain how these measures may be used as metrics to discover the functionally important methods of an object-oriented software system from execution traces of typical usage scenarios.

The objectives of this thesis are:

To define the intuitive yet abstract concept of functional importance in the context of our understanding of intra-systemic object interactions and collaborations in object-oriented software systems.

To use network analysis to analyse a network graph representation of intra-systemic object interactions and collaborations, built from dynamic execution traces of the software, in order to predict the structurally important elements of the network (a small illustrative sketch follows this list of objectives).

To empirically verify that the predicted structurally important elements are truly important, by exploiting the notion that structural integrity and functional integrity are correlated, i.e. that structurally important elements are also functionally important elements of the subject system.
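As a hedged illustration of the second objective, the sketch below converts a caller-callee execution trace into a directed graph and ranks methods by centrality. The trace format, the method names and the choice of the networkx library are assumptions made purely for illustration, not a description of the tooling used in this thesis.

# Sketch: turn a dynamic execution trace of (caller, callee) method pairs
# into a directed network graph and rank methods by betweenness centrality.
# The trace below is invented for illustration.
import networkx as nx

trace = [  # hypothetical caller -> callee events from one usage scenario
    ("Main.run", "Parser.parse"),
    ("Parser.parse", "Lexer.nextToken"),
    ("Parser.parse", "Ast.build"),
    ("Main.run", "Interpreter.eval"),
    ("Interpreter.eval", "Ast.build"),
]

call_graph = nx.DiGraph()
for caller, callee in trace:
    call_graph.add_edge(caller, callee)

# Methods with high betweenness centrality are predicted to be structurally,
# and by the hypothesis of this thesis functionally, important.
scores = nx.betweenness_centrality(call_graph)
for method, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.3f}  {method}")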

Network analysis is currently used in the analysis of social, socio-technical, technological and biological networks. In many of these fields the identification of important elements is a necessary part of analysing and reasoning about the dynamic phenomena of the systems under study. In many of these fields it is also difficult to gain access to data, or to perform even non-destructive testing to verify predicted importance. This is especially true in the analysis of animal brain networks, which attempts to link the axonal structure of the brain to its normal and abnormal functional characteristics. Whilst current models of brain networks, and assumptions about the importance of structural-functional elements, have been found to correlate with clinical observations, the claim cannot be made with confidence because the reliability of the network measures is not known, and verification cannot be attempted in clinical or laboratory environments because it is simply not practicable. In software engineering, however, data collection is comparatively simple and quick, making it possible not only to use network analysis to predict the importance of elements, but also to simulate network perturbation by leveraging code mutation.
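The perturbation idea can be sketched as follows. Here a targeted node removal stands in for the effect of mutating the corresponding method, the network is randomly generated and the networkx library is again assumed, so the sketch is indicative rather than a definitive procedure.

# Sketch of network perturbation: compare the damage done by removing the
# most central nodes (analogous to mutating important methods) with the
# damage done by removing randomly chosen nodes.
import random
import networkx as nx

G = nx.erdos_renyi_graph(60, 0.08, seed=1)  # stand-in interaction network
centrality = nx.degree_centrality(G)
by_centrality = sorted(centrality, key=centrality.get, reverse=True)

def largest_component_after_removal(graph, nodes):
    # Remove the given nodes and return the size of the largest remaining component.
    H = graph.copy()
    H.remove_nodes_from(nodes)
    if H.number_of_nodes() == 0:
        return 0
    return len(max(nx.connected_components(H), key=len))

targeted = largest_component_after_removal(G, by_centrality[:5])
random_pick = largest_component_after_removal(G, random.sample(list(G.nodes), 5))
print("Largest component after targeted removal:", targeted)
print("Largest component after random removal:  ", random_pick)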

This thesis is envisaged to make three contributions:

Provide the first definition of functional importance in the context of software engineering. Indeed, the concept has not been formally defined even for biological systems, where it is widely used.

Empirically verify that, in addition to other techniques, network analysis may be used to discover functionally important elements in software systems.

Demonstrate a technique in which a software system is used as a surrogate for brain networks, to verify that network analysis can truly predict structurally and functionally important elements.

1.3 Outline of the thesis

The thesis is organised into seven chapters, including this one. Chapter 2 presents the literature review. Chapter 3 focuses on the research questions and their conceptual underpinnings. Chapter 4 discusses the research methodology. Chapters 5 and 6 discuss the various empirical software engineering experiments conducted as part of this research. Chapter 7 is the concluding chapter, in which the results are discussed.


