Determining Effectiveness Of Code Through Software Metrics

02 Nov 2017


Testing software is hard work, but it is essential for finding faults in newly developed software and for determining the effectiveness of the code, which in turn increases its quality. The physical task of performing the tests can take a great deal of time and effort, and manual testing alone is neither perfect nor sufficient to determine the effectiveness of code. We therefore use different software metrics to improve the effectiveness of code, which minimises the burden of manual inspection. We can measure the number of faults in the specification, design, code, and test plan, and trace them back to their root causes.

INTRODUCTION

Software development is not an easy task. By carefully recording faults, failures, and changes as they occur, we can measure software quality, enabling us to compare different products, predict the effects of changes, assess the effects of new practices, and set targets for process and product improvement. Modifying existing code, correcting defects, and otherwise evolving software are major parts of maintenance [1], which is reported to consume up to 90% of the total cost of software projects [2]. It is difficult to find any software without bugs; although our aim is to avoid bugs, this is not entirely possible, so we use techniques to detect and correct them. Writing the code is only the initial step in creating a software system, and most of the cost of a software project is committed to modifying existing code, detecting and correcting errors, and generally evolving the code base. Using implicit, language-based specifications can reduce the cost of specification writing [4].

Incomplete documentation is one factor behind maintenance difficulty: much of the maintenance effort is spent studying existing software to understand its correct behaviour. Formal specifications are difficult for humans to construct [3], and incorrect specifications are difficult for humans to debug and modify [4]. Specification mining projects attempt to address these problems by inferring specifications from program source code or execution traces [5, 6]. Bugs are more insidious than we ever expect them to be, yet it is inconvenient to categorise them (initialisation, call sequence, wrong variables, and so on); a bad specification may lead us to mistake good behaviour for bugs, and vice versa. There is no universally correct way to categorise bugs: a given bug can be placed in one category or another depending on its history and the programmer's state of mind. In our method, the tool finds different factors in the program code and presents the results by comparison.

RELATED WORK

Previously, we identified several of the many automated testing tools. Our work is specifically related to specification mining, software quality metrics, and object-oriented metrics for cyclomatic complexity. The literature on specification mining is varied, but many tools and techniques share a similar characterisation of properties. Specification mining is the task of constructing a formal specification from examples of a program's behaviour, from analysis of its source code [6], or from both [20]. Specification miners depend heavily on their input traces. Some previous specification miners use a set of source-level features, related to software engineering processes, that capture the dependability of code for specification mining; we analyse the relative predictive power of each of these features. Experimental evidence shows that our notion of trustworthy code serves as a basis for evaluating the dependability of traces: we provide a characterisation of such traces and show that off-the-shelf specification miners can learn just as many specifications from them. Using implicit, language-based specifications, or reusing standard library specifications, can reduce the cost of writing specifications [17].

These specification mining techniques take programs (and possibly dynamic traces or other hints) as input and produce candidate specifications as output. Such specifications can also be used for documenting, refactoring, debugging, testing, maintaining, and optimising a program. Our centre of attention is finding and evaluating specifications in a particular context: given a program and a generic verification tool, which specification mining technique should be used to find bugs in the program and thereby improve software quality? Thus we are concerned both with the number of "real" and "false positive" specifications produced by the miner and with the number of "real" and "false positive" bugs found using those "real" specifications.
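The mining step itself is not spelled out above. As a minimal, hypothetical sketch (the function name, trace format, and thresholds below are assumptions, not the tool described in this paper), a miner for two-event "a must be followed by b" candidates over execution traces could look like this:

```python
from collections import Counter
from itertools import permutations

def mine_pair_specs(traces, min_support=5, min_confidence=0.9):
    """Propose candidate two-event specifications "a must be followed by b".

    traces: list of event sequences, e.g. [["open", "read", "close"], ...].
    A pair (a, b) becomes a candidate if, whenever a occurs, b occurs later
    in the same trace with frequency >= min_confidence.
    """
    followed = Counter()   # occurrences of a after which b appears
    occurred = Counter()   # occurrences of a overall
    events = {e for t in traces for e in t}

    for a, b in permutations(events, 2):
        for trace in traces:
            for i, e in enumerate(trace):
                if e == a:
                    occurred[(a, b)] += 1
                    if b in trace[i + 1:]:
                        followed[(a, b)] += 1

    candidates = []
    for pair, n in occurred.items():
        if n >= min_support and followed[pair] / n >= min_confidence:
            candidates.append((pair, followed[pair] / n))
    return sorted(candidates, key=lambda c: -c[1])

# Example: traces from a hypothetical file-handling API; one trace is buggy.
traces = [["open", "read", "close"]] * 8 + [["open", "read"]]
print(mine_pair_specs(traces, min_support=5, min_confidence=0.8))
```

Real miners are considerably more sophisticated (temporal-logic templates, probabilistic models), but the candidate-plus-confidence structure shown here is the part the rest of this paper builds on.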

Various software quality metrics are used to determine the effectiveness of code. They help us quantify factors of a particular piece of code, such as software size, productivity, originality, and quality. Complexity metrics estimate the amount of decision logic in a piece of software, and they remain in industrial use for measuring code quality and enforcing complexity limits [15]. Object-oriented metrics are also used for process improvement in software development, and the first step of process improvement is the ability to measure the process. Software metrics is a term that embraces many activities involving some degree of software measurement: data collection, quality models and measures, reliability models, performance evaluation and modelling, structural and complexity metrics, and the evaluation of methods and tools.
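To make the complexity metric concrete, here is a small illustrative sketch (an assumption about how such a metric could be computed, not the tool described in this paper) that approximates McCabe's cyclomatic complexity of a Python function by counting decision points in its abstract syntax tree:

```python
import ast

# Node types treated as decision points (an assumed, simplified set).
DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return 1 + decisions

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(x):
        pass
    return "positive"
"""
print(cyclomatic_complexity(sample))  # 1 + (if, elif -> nested If, for) = 4
```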

ACTUAL WORK DONE

Quality is really a composite of many characteristics, so the notion of quality is usually captured in a model that depicts those characteristics and their relationships. In many organisations, software testing is carried out manually: after the product reaches a mature stage, the test team generates test cases and manually exercises each feature. If a defect is found, the software is modified and then tested again with the same test cases. Such manual testing is not advisable because it is very time-consuming, the same set of operations must be repeated, and even managing the testing process is complicated, since testing has to be planned, bugs tracked, and reliability assessed. These drawbacks can be overcome if the testing process is automated. Automated test tools help manage the testing process effectively and reduce manual testing to a large extent; once the software is ready for testing, its functionality can be tested repeatedly to improve quality and reliability.

In the proposed method, our aim is to develop a system that determines the quality of code by considering the different aspects affecting it. The term "quality of the code" can be explained using factors such as copied code, the position of the author, mixing of code, readability of code, path feasibility, and many other factors used to measure the complexity of code. Our technique introduces a new specification miner that works in several steps. First, the tool statically estimates the quality of source code fragments. It then lifts those quality judgements to traces, covering all code visited along an execution. Finally, it weights each trace by its quality when counting event frequencies for specification mining. The system thus develops an automatic specification miner that balances true positives (required behaviours) against false positives (non-required behaviours). We argue that one important reason previous miners have high false positive rates is that they falsely assume all code is equally likely to be correct. For example, code written by an inexperienced developer, copied-and-pasted code, or rarely executed code may still appear in an execution trace through recently modified code; we believe such traces are a poor guide to correct behaviour, especially when compared with well-tested, commonly executed, and stable code. Patterns of specification adherence may also be useful to a miner: a candidate that is violated in high-quality code but adhered to in low-quality code is less likely to represent required behaviour than one that is adhered to in high-quality code but violated in low-quality code. We assert that a combination of lightweight, automatically collected quality metrics over source code can usefully provide both positive and negative feedback to a miner attempting to distinguish between true and false specification candidates.
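The trace-weighting step described above can be sketched as follows; the quality scores, event names, and function below are hypothetical stand-ins for whatever static quality metrics the miner actually uses:

```python
from collections import defaultdict

def weighted_event_frequencies(traces):
    """Count event-pair frequencies, weighting each trace by its quality.

    traces: list of (events, quality) pairs, where events is a sequence of
    API events observed along one execution path and quality is a score in
    [0, 1] estimated statically from the code that path visits (for example
    from churn, author experience, or clone detection -- assumed inputs).
    """
    followed = defaultdict(float)  # weighted count: b observed after a
    occurred = defaultdict(float)  # weighted count: a observed at all

    for events, quality in traces:
        for i, a in enumerate(events):
            for b in set(events[i + 1:]):
                followed[(a, b)] += quality
            occurred[a] += quality

    # Confidence of candidate "a must be followed by b", weighted by quality.
    return {pair: cnt / occurred[pair[0]]
            for pair, cnt in followed.items() if occurred[pair[0]] > 0}

# A trace through stable, well-tested code counts for more than one through
# recently modified, rarely executed code.
traces = [(["lock", "use", "unlock"], 0.9),
          (["lock", "use"], 0.2)]
print(weighted_event_frequencies(traces))
```

Under this weighting, the violating low-quality trace barely lowers the confidence of the "lock must be followed by unlock" candidate, which is exactly the effect the paragraph above argues for.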

Code quality information may be gathered either from the source code itself or from related artifacts, such as the version control history. By augmenting the trace language with information from the software engineering process, we can evaluate the quality of every piece of evidence supporting a candidate specification (traces that adhere to the candidate as well as those that violate it, in both high- and low-quality code) and more accurately judge the likelihood that the candidate is valid. We are developing a testing tool that tests program code as well as source code to some extent. Source code testing tools are generally language-specific, but we are trying to develop a testing tool that is useful for multiple programming languages. This tool can also be used to check whether the developer is following coding guidelines. Using a test tool is both beneficial and risky for the developer: a test tool needs to be thought of as a long-term investment that requires maintenance in order to provide long-term benefits. The main advantage of test tools is automation; the effort and time spent performing scheduled, uninteresting, monotonous tasks is significantly reduced, and test tools provide more predictable and consistent results than fallible humans. Most of the risks associated with test tools stem from over-optimistic expectations about what the tool can do and a lack of appreciation of the effort necessary to implement it and achieve the benefits it can bring. The system architecture is shown in the following figure, which explains the modules to be generated.
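As one hedged example of mining quality information from a related artifact, the snippet below computes per-file churn from the version control history using `git log --numstat`; treating heavily churned files as lower-confidence sources of traces is an assumed heuristic, not one prescribed by this paper:

```python
import subprocess
from collections import Counter

def file_churn(repo_path="."):
    """Total lines added plus deleted per file, from `git log --numstat`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--numstat", "--format="],
        capture_output=True, text=True, check=True).stdout
    churn = Counter()
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines are "added<TAB>deleted<TAB>path"; binary files show "-".
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added, deleted, path = parts
            churn[path] += int(added) + int(deleted)
    return churn

if __name__ == "__main__":
    for path, lines in file_churn(".").most_common(10):
        print(f"{lines:6d}  {path}")
```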

Explanation of system

Our system for determining the effectiveness of code through software metrics uses the following stages (a skeleton sketch of the pipeline is given after the list):

Accept input in the form of computer program code.

Perform input purification.

Check for error occurrences in the code.

Check the quality specification for the given code.

Assign a rank to the different conditions, using the calculated results.

Generate output in the form of a quality report.
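The stages above are described only at a high level; the skeleton below is a hypothetical arrangement of the pipeline, with placeholder implementations standing in for the unspecified purification, error-checking, metric, and ranking logic:

```python
from dataclasses import dataclass, field

@dataclass
class QualityReport:
    errors: list = field(default_factory=list)
    metric_scores: dict = field(default_factory=dict)
    rank: str = "unrated"

def purify(source: str) -> str:
    """Stage 2: input purification (here just whitespace normalisation)."""
    return "\n".join(line.rstrip() for line in source.splitlines() if line.strip())

def check_errors(source: str) -> list:
    """Stage 3: placeholder error check; a real tool would parse or compile."""
    return ["empty input"] if not source else []

def score_metrics(source: str) -> dict:
    """Stage 4: placeholder quality metrics (line count, rough branch count)."""
    lines = source.splitlines()
    return {"loc": len(lines),
            "branches": sum(l.lstrip().startswith(("if", "for", "while"))
                            for l in lines)}

def rank(scores: dict, errors: list) -> str:
    """Stage 5: assumed thresholds, purely illustrative."""
    if errors:
        return "poor"
    return "good" if scores["branches"] <= 10 else "review"

def quality_report(source: str) -> QualityReport:
    """Stages 1 to 6 chained together into one quality report."""
    clean = purify(source)
    errs = check_errors(clean)
    scores = score_metrics(clean)
    return QualityReport(errors=errs, metric_scores=scores, rank=rank(scores, errs))

print(quality_report("if x:\n    y = 1\n"))
```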

FEASIBILITY STUDY:

1. Technical feasibility:

Technical feasibility deals with the study of function, performance, and constraints (such as resource availability, technology, and development risk) that may affect the ability to achieve an acceptable system.

It determines whether the work can be done and whether the technology used is compatible with the current system. Since the web-based service will be developed using ASP.NET, it is platform independent.

The technical issues investigated during the study are as follows:

The technology for implementation of the project system is readily available.

The system is capable of expansion.

The proposed system provides adequate accuracy, reliability and data security.

Here we have used .NET technology, which is feasible for implementing the project and allows expansion in line with the future scope of the project. The proposed system has adequate accuracy and is secure.

2. Economic feasibility:

One of the factors that affects the development of a new system is the cost it would incur. The existing resources are sufficient for implementing the proposed system, so the main cost to be incurred is that of uploading the service to the server.

3. Behavioral feasibility:

The web-based service is found to be:

Efficient

Time saving

Accurate

Secure and reliable

4. Operational feasibility:

There is no difficulty in implementing the web-based service if the user has knowledge of the internal working of the service; therefore, it is assumed that the user will not face any problems in running the service. The main problem faced during development of a new web-based service is getting acceptance from the users.

CRITERIA FOR METRIC EVALUATION

Several researchers have suggested properties that software metrics should possess to increase their usefulness. Basili and Reiter suggest that metrics should be sensitive to externally observable differences in the development environment, and that they should correspond to intuitive notions about the characteristic differences between the software artifacts being measured [19].

The majority of the recommended properties are qualitative in nature and, as a result, most proposals for metrics have tended to be evaluated informally.

Consistent with the desire to move metrics research onto a more rigorous footing, it is advantageous to have a formal set of criteria with which to evaluate proposed metrics. More recently, Weyuker has developed a formal list of desiderata for software metrics and has evaluated a number of existing metrics against these properties. These desiderata include the notions of monotonicity, interaction, non-coarseness, non-uniqueness, and permutation.
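As one example of these desiderata (stated here in standard notation rather than taken from the text), Weyuker's monotonicity property requires that for any program bodies $P$ and $Q$ and complexity measure $\mu$:

\[
\mu(P) \le \mu(P;Q) \quad\text{and}\quad \mu(Q) \le \mu(P;Q),
\]

where $P;Q$ denotes the composition of $P$ followed by $Q$; in other words, appending code must never decrease measured complexity.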

TESTING METHODOLOGY

The complexity measure is designed to conform to our intuitive notion of complexity, and since we often spend as much as 50 percent of our time in test and debug mode, the measure should correlate closely with the amount of work required to test a program. In this section the relationship between testing and cyclomatic complexity is defined and a testing methodology is developed.

Let us assume that a program p has been written, its complexity v has been calculated, and the number of paths tested is ac (actual complexity). If ac is less than v, then one of the following conditions must be true:

1) there is more testing to be done;

2) the program flow graph can be reduced in complexity by v - ac; and

3) portions of the program can be reduced to in-line code.

Up to this point the complexity issue has been considered purely in terms of the structure of the control flow. The testing issue, however, is closely related to the data flow, because it is the data behaviour that either precludes or makes realisable the execution of any particular control path.
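To make the comparison between v and ac concrete, the following hypothetical sketch computes v(G) = e - n + 2p for a small control-flow graph and checks it against the number of independent paths exercised so far (the graph and the value of ac are invented for illustration):

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's formula v(G) = e - n + 2p for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Hypothetical flow graph of a small function with two binary decisions.
nodes = ["entry", "d1", "a", "b", "d2", "c", "d", "exit"]
edges = [("entry", "d1"), ("d1", "a"), ("d1", "b"), ("a", "d2"), ("b", "d2"),
         ("d2", "c"), ("d2", "d"), ("c", "exit"), ("d", "exit")]

v = cyclomatic_complexity(edges, nodes)   # 9 - 8 + 2 = 3
ac = 2                                    # independent paths covered by tests so far
if ac < v:
    print(f"v={v}, ac={ac}: more testing is needed, or the flow graph can be "
          f"reduced in complexity by v-ac={v - ac}")
```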

ADVANTAGES

1. This technique evaluates every line according to its importance, not merely its existence as part of the software code.

2. It measures the quality of code by considering significance, impact, and importance.

CONCLUSION

Formal specifications have numerous applications, from testing, optimising, refactoring, maintenance, and documentation to debugging and repair. However, formal specifications are difficult to produce manually. Our aim is to produce a better testing technique that finds defects in program code. Specification mining for finding bugs becomes more important as specifications become the limiting factor in verification efforts. These specifications guide us to previously unknown bugs in large real-world systems. We found that this method is efficient, in terms of both machine and human resources. The results produced by our tool are easily understood by humans and are as accurate as what would be generated by humans manually.


