Image Based Vehicle Tracking And Classification


02 Nov 2017


Abstract

In recent years, the development of automatic traffic surveillance system has received great attention in the academic and industrial research. Based on computer vision technology, the purpose of this work is to construct the traffic surveillance system on the highway for estimating traffic parameters, such as vehicle counting and classification.

The proposed system mainly consists of three steps including vehicle region extraction, vehicle tracking, and classification. The background subtraction method is firstly utilized to extract the foreground regions from the highway scene. Some geometric properties are applied to remove the false regions and shadow removal algorithm is used for obtaining more accurate segmentation results. After vehicle detection, a graph-based vehicle tracking method is used for building the correspondence between vehicles detected at different time instants.

CONTENTS

CHAPTER NO TITLE

ACKNOWLEDGEMENT

SYNOPSIS

LIST OF FIGURES

LIST OF ABBREVIATIONS

1. INTRODUCTION

1.1 ORGANIZATION PROFILE

1.2 OVERVIEW OF PROJECT

2. SYSTEM STUDY

HARDWARE SPECIFICATION

SOFTWARE SPECIFICATION

EXISTING SYSTEM

PROPOSED SYSTEM

FEASIBILITY STUDY

SOFTWARE DESCRIPTION

3. SYSTEM DESIGN

3.1 MODULE DESCRIPTION

3.2 INPUT DESIGN

3.3 OUTPUT DESIGN

3.4 PROJECT FLOW DIAGRAM

3.5 MODULE DIAGRAM

3.6 UML DIAGRAMS

4. SYSTEM IMPLEMENTATION

4.1 SYSTEM IMPLEMENTATION

4.2 SYSTEM ARCHITECTURE

5. CODE REVIEW AND TESTING

5.1 CODE REVIEW

5.2 TESTING PROCESS

6. LITERATURE REVIEW

7. CONCLUSION

8. BIBLIOGRAPHY

9. SCOPE OF FUTURE ENHANCEMENT

APPENDIX

FORMS

CODE

INTRODUCTION

Vehicle detection is very important for civilian and military applications, such as highway monitoring and urban traffic planning. For traffic management, vehicle detection is the critical step. Vehicle detection can be achieved using the common magnetic loop detectors, which are still used even though they are not very effective. Loop detectors are point detectors and cannot give traffic information for the whole highway. Vision-based techniques are more suitable than magnetic loop sensors.

They do not disturb traffic while being installed and they are easy to modify. Their applicability is more comprehensive because they can be used for many purposes, such as vehicle detection, counting, classification, tracking, and monitoring. One camera can be used to monitor a large section of highway. In spite of the apparent advantages of vision-based methods, there are still many challenges: weather changes, changes in sunlight direction and intensity, building shadows, and the fact that vehicles have different sizes, shapes, and colors. In this project, one digital camera is installed over the freeway to capture successive images.

The captured images can be analyzed to extract the background automatically. Each image contains the background of the highway and the moving vehicles. It is difficult to get a freeway image without moving vehicles (the background), so the freeway background must be extracted from the image sequence. The extracted background is used in subsequent analysis to detect moving vehicles. Current approaches for vehicle detection try to overcome environment changes. Some approaches achieve detection using background subtraction only, predicting the background through the next update interval.

In these approaches the background is not extracted but detected and then updated while processing the next images. Intensity changes, stopped vehicles (or very slowly moving vehicles), and camera motion lead to missed detections in these techniques, so they are suited to detecting vehicles in simple scenes only. Another approach uses edge-based techniques, in which a 3D model is proposed for the vehicle. This 3D model depends on the edge detection of the vehicle and is applicable under perfect conditions for passenger vehicles only. An edge in image processing is an abrupt change in the intensity values.

Edge detection suffers from many difficulties such as vehicle shadows, dark colors, and ambient light. The edge detection process becomes more difficult when the vehicle color is close to the freeway color. In other approaches, such as probabilistic and statistical methods, there is no strict distribution for the vehicle model, so they use approximations of the unknown distributions; this leads to intensive computation and is time consuming. Applying these methods leads to many missed detections, and they cannot be applied to complicated scenes. In these techniques the detection rate is low compared to other approaches. Other approaches use an explicit detail model, where they need a detailed model and a hierarchy of detail levels.

In one such approach, the model contains substructures like the windshield, roof, and hood, and radiometric features such as color constancy between the hood color and the roof color (where the gray level is higher than the median of the histogram). It is apparent that a large number of models are needed to cover all types of vehicles. In another approach, a hierarchical model is used to decide in the detection step (which identifies and clusters the image pixels) which pixels have a strong probability of belonging to vehicles. In this case a huge amount of computation is needed to detect the vehicles, and this results in missed detections for vehicles of different shapes.

In another approach, vehicle detection is implemented by calculating various characteristic features in the image of a monochrome camera. The detection process uses shadow and symmetry features of the vehicle to generate vehicle hypotheses. This is beneficial for driver assistance but is not applicable to vehicle counting or complicated scenes. Neural networks have also been used for vehicle detection. Neural networks have drawbacks; the main one is that there is no guarantee that they reach the global minimum (in this case there are no closed-form solutions for modeling the vehicle detection).

The other is that they must learn from a data set representative of the real world, and there is no universally optimal model for a neural network. Fuzzy measures have also been used to detect vehicles. The detection process depends on the light intensity value: when the light intensity value falls within a certain interval, fuzzy measures must be used to decide whether it is a vehicle or not; when the intensity value is larger than this interval, it represents a vehicle; and when it is less than this interval, no vehicle exists.

This approach suffers from environmental light changes and from the determination of the interval needed to apply the fuzzy measures. In this project, background extraction and edge detection are used together to detect vehicles. This is useful in two ways: first, it combines the advantages of background subtraction and edge detection to detect vehicles; second, it is able to deal with complex scenes and to treat intensity-change problems. The approach can be implemented in different environments where the light and the traffic conditions change.

Objective:

In this project, one digital camera is installed over the freeway to capture successive images. The captured images can be analyzed to extract the background automatically. Each image contains the background of the highway and the moving vehicles. It is difficult to get a freeway image without moving vehicles (the background), so the freeway background must be extracted from the image sequence. The extracted background is used in subsequent analysis to detect moving vehicles.

Problem Definition:

The road region represents an important part of the problem domain knowledge and can thus be useful in detecting traffic events. It consists of the set of road pixels from the camera image's perspective and contains enter, exit, and rail regions. Each vehicle moving along the analyzed road region is characterized by a unique identification number that is assigned after it is first detected passing through any enter region. The state of a tracked vehicle st at frame t consists of its identification number id, which is a consecutive number assigned after detecting the vehicle passing through an enter region; a Boolean flag ft indicating whether the vehicle's current position is known at frame t; two spatial coordinates denoting the target's center-of-mass position (xt and yt, respectively); two scalars representing the sides of the vehicle bounding box (Lxt and Lyt, respectively); and the corresponding velocity coordinates (vx,t and vy,t, respectively). Therefore, the vehicle state st at frame t is represented by the eight-tuple st = (id, ft, xt, yt, Lxt, Lyt, vx,t, vy,t).
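As an illustration only (the struct below is a hypothetical sketch, not the project's actual data type), the eight-tuple vehicle state could be represented in C# as:

// Minimal sketch of the eight-tuple vehicle state st described above.
// Field names follow the text (id, ft, xt, yt, Lxt, Lyt, vx,t, vy,t);
// this struct is illustrative and not taken from the original system.
struct VehicleState
{
    public int Id;             // identification number assigned at an enter region
    public bool PositionKnown; // flag ft: is the position known at frame t?
    public double X, Y;        // center-of-mass position (xt, yt)
    public double Lx, Ly;      // sides of the bounding box (Lxt, Lyt)
    public double Vx, Vy;      // velocity components (vx,t, vy,t)
}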

FEASIBILITY STUDY:

The feasibility of the project is analyzed in this phase, and a business proposal is put forth with a very general plan for the project and some cost estimates. During system analysis the feasibility study of the proposed system is to be carried out. This is to ensure that the proposed system is not a burden to the company. For feasibility analysis, some understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are,

 ECONOMICAL FEASIBILITY

 OPERATIONAL FEASIBILITY

 TECHNICAL FEASIBILITY

2.3.1 Economical Feasibility:

This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited. The expenditures must be justified. Thus the developed system is well within the budget, and this was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.

2.3.2 Operational Feasibility:

This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but instead must accept it as a necessity. The level of acceptance by the users solely depends on the methods that are employed to educate the user about the system and to make him familiar with it. His level of confidence must be raised so that he is also able to make some constructive criticism, which is welcomed, as he is the final user of the system.

2.3.3 Technical Feasibility:

This study is carried out to check the technical feasibility, that is, the technical requirements of the system. The system developed must not place a high demand on the available technical resources; otherwise, high demands would be placed on the client.

Literature Review:

1. V. Kastrinaki, M. Zervakis, and K. Kalaitzakis, "A Survey of Video Processing Techniques for Traffic Applications," Image and Vision Computing, vol. 21, no. 4, 2003, pp. 359–381.

Video-based traffic flow monitoring is a fast emerging field based on the continuous development of computer vision. A survey of the state-of-the-art video processing techniques in traffic flow monitoring is presented in this paper. Firstly, vehicle detection is the first step of video processing and detection methods are classified into background modeling based methods and non-background modeling based methods. In particular, nighttime detection is more challenging due to bad illumination and sensitivity to light. Then tracking techniques, including 3D model-based, region-based, active contour-based and feature-based tracking, are presented. A variety of algorithms including MeanShift algorithm, Kalman Filter and Particle Filter are applied in tracking process. In addition, shadow detection and vehicles occlusion bring much trouble into vehicle detection, tracking and so on. Based on the aforementioned video processing techniques, discussion on behavior understanding including traffic incident detection is carried out. Finally, key challenges in traffic flow monitoring are discussed.

2. E. Bas, M. Tekalp, and F.S. Salman, "Automatic Vehicle Counting from Video for Traffic Flow Analysis," Proc. IEEE Intelligent Vehicles Symp., IEEE Press, 2007, pp. 392–397.

We propose a new video analysis method for counting vehicles, where we use an adaptive bounding box size to detect and track vehicles according to their estimated distance from the camera given the scene-camera geometry. We employ adaptive background subtraction and Kalman filtering for road/vehicle detection and tracking, respectively. Effectiveness of the proposed method for vehicle counting is demonstrated on several video recordings taken at different time periods in a day at one location in the city of Istanbul.

3. G. Zhang, R.P. Avery, and Y. Wang, "A Video-Based Vehicle Detection and Classification System for Real-Time Traffic Data Collection Using Uncalibrated Cameras," J. Transportation Research Board, IEEE Press, 2007, pp. 138–147.

On-board video analysis has attracted a lot of interest over the last two decades, mainly for safety improvement (through, e.g., obstacle detection or driver assistance). In this context, our study aims at providing a video-based real-time understanding of the urban road traffic. Considering a video camera fixed on the front of a public bus, we propose a cost-effective approach to estimate the speed of the vehicles on the adjacent lanes when the bus operates on its reserved lane. We propose to work on 1-D segments drawn in the image space, aligned with the road lanes. The relative speed of the vehicles is computed by detecting and tracking features along each of these segments, while the absolute speed of vehicles is estimated from the relative one thanks to odometer and/or GPS data. Using pre-defined speed thresholds, the traffic can be classified in real time into different categories such as "fluid", "congestion", etc. As demonstrated in the evaluation stage, the proposed solution offers both good performance and low computing complexity, and is also compatible with cheap video cameras, which allows its adoption by city traffic management authorities.

4. S.C. Cheung and C. Kamath, "Robust Techniques for Background Subtraction in Urban Traffic Video," Proc. Int'l Conf. Visual Communications and Image Processing, Int'l Soc. for Optics and Photonics (SPIE), 2004, pp. 881–892.

Video processing has become an efficient technique for collecting parameters of urban traffic. Detection and tracking of multiple targets with an uncalibrated CCD camera is developed in this paper. In order to obtain moving targets from the video sequence efficiently, the paper presents a mixture Gaussian background model based on the object level, and moving objects are extracted after background subtraction. Moving multi-targets are tracked through integration of the motion and shape features by Kalman filter modeling. In order to ensure continuity and stabilization, occlusion processing is performed. The proposed approach is validated under real traffic scenes. Experimental results show that detection and tracking are robust and adaptive and can be applied well in real-world scenes.

Existing System:

There are three major stages, including vehicle detection, tracking, and classification, in estimating the desired traffic parameters of vehicles. Under the assumption that the camera is stationary, most methods detect the vehicle by background subtraction or image differencing. After that, several vehicle features, such as shape, aspect ratio, and texture, are extracted for classification. The detected objects with temporal consistency are classified as vehicles.

Drawbacks:

If the object is moving smoothly, only small changes are received from frame to frame, so it is impossible to get the whole moving object. Things become worse when the object is moving very slowly, in which case the algorithms will not give any result at all.

Proposed System:

The focus of this work is to propose a vision-based system that can successfully detect and track vehicles in highway scenes. The overall flow of the proposed approach is described below. Firstly, we apply the background subtraction method to extract the possible foreground regions from the highway scene. The false regions are identified by introducing geometric constraints, and shadow pixels are eliminated. To construct the temporal correspondence between the vehicles detected at different times, we formulate the vehicle tracking as a graph-based correspondence problem between detections at successive time instants.
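The graph-based formulation itself is not detailed here; as a simplified, hypothetical illustration of building correspondences between frames, the sketch below greedily matches each detection in the current frame to the nearest unmatched detection of the previous frame. The class, method, and parameter names (CentroidTracker, Match, maxDistance) are illustrative and not part of the original system.

using System;
using System.Collections.Generic;

// Simplified sketch: link each current detection to the nearest unmatched
// previous detection. The real system builds a graph-based correspondence;
// this greedy nearest-centroid rule only illustrates the idea.
class CentroidTracker
{
    // prev and curr hold the (x, y) centroids of detected vehicle regions.
    // Returns, for each current detection, the index of the matched previous
    // detection, or -1 when the detection starts a new track.
    public static int[] Match(List<double[]> prev, List<double[]> curr, double maxDistance)
    {
        int[] assignment = new int[curr.Count];
        bool[] taken = new bool[prev.Count];
        for (int i = 0; i < curr.Count; i++)
        {
            int best = -1;
            double bestDist = maxDistance;
            for (int j = 0; j < prev.Count; j++)
            {
                if (taken[j]) continue;
                double dx = curr[i][0] - prev[j][0];
                double dy = curr[i][1] - prev[j][1];
                double dist = Math.Sqrt(dx * dx + dy * dy);
                if (dist < bestDist) { bestDist = dist; best = j; }
            }
            assignment[i] = best;
            if (best >= 0) taken[best] = true;   // each previous detection is used once
        }
        return assignment;
    }
}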

Advantage Of Proposed System :

It is much simpler to understand.

The implementation of the filter is more efficient, so the filter produces better performance.


Requirement Specification

Hardware Requirements:

System : Pentium IV 2.4 GHz.

Hard Disk : 40 GB.

Monitor : 15-inch.

Mouse : Logitech.

RAM : 1 GB.

Software Requirements:

Operating system : - Windows XP Professional.

Front End : - Visual Studio.NET 2005

Coding Language : - Visual C# .NET.

Input :- Video

Modules:

Input Video

Background Extraction

Vehicle Detection

Vehicle Count

Module Description:

I. Input Video:

Freeways are originally designed to provide high mobility to road users. However, the increase in vehicle numbers has led to congestion forming on freeways around the world. Daily recurrent congestion substantially reduces the freeway capacity when it is most needed. Expanding existing freeways cannot provide a complete solution to the congestion problem due to economic and space constraints.

II. Background Extraction:

To extract the freeway background automatically, a sufficient number of successive frames must be available for processing. The automatic background extraction starts by processing the first three successive frames (images). The automatic background extraction results are very good and promising. The most effective parameters playing a main role in automatic background extraction are the threshold level and the dilation.

III. Vehicle Detection:

To detect vehicles, the extracted background must be subtracted from the current image. The edges of the current image and of the background image are then found, and the edge of the background image is subtracted from the edge of the current image. After the background is subtracted from the current image, the resulting image is filtered to keep the moving vehicles only. Using this technique, most vehicles are detected; moving vehicles are detected easily once the background is subtracted.

IV. Vehicle Count:

A number of successive frames are used to extract the background. A digital camera is used to take the shots; it is placed directly over the highway and shoots six frames per second. The images are taken at midday to decrease the effect of vehicle shadow problems.

SYSTEM ENVIRONMENT

SOFTWARE DESCRIPTION

OVERVIEW OF .NET

.NET is a "Software Platform". It is a language-neutral environment for developing rich .NET experiences and building applications that can easily and securely operate within it. When developed applications are deployed, those applications will target .NET and will execute wherever .NET is implemented instead of targeting a particular Hardware/OS combination. The components that make up the .NET platform are collectively called the .NET Framework.

The .NET Framework is a managed, type-safe environment for developing and executing applications. The .NET Framework manages all aspects of program execution, such as allocation of memory for the storage of data and instructions, granting and denying permissions to the application, managing execution of the application, and reclaiming memory for resources that are no longer needed.

The .NET Framework is designed for cross-language compatibility. Cross-language compatibility means, an application written in Visual Basic .NET may reference a DLL file written in C# (C-Sharp). A Visual Basic .NET class might be derived from a C# class or vice versa.

The .NET Framework consists of two main components:

Common Language Runtime (CLR)

Class Libraries

Common Language Runtime (CLR)

The CLR is described as the "execution engine" of .NET. It provides the environment within which the programs run. It's this CLR that manages the execution of programs and provides core services, such as code compilation, memory allocation, thread management, and garbage collection. Through the Common Type System (CTS), it enforces strict type safety, and it ensures that the code is executed in a safe environment by enforcing code access security. The software version of .NET is actually the CLR version.

Working of the CLR

When a .NET program is compiled, the output of the compiler is not an executable file but a file that contains a special type of code called the Microsoft Intermediate Language (MSIL), which is a low-level set of instructions understood by the common language runtime. This MSIL defines a set of portable instructions that are independent of any specific CPU. It is the job of the CLR to translate this intermediate code into executable code when the program is executed, allowing the program to run in any environment for which the CLR is implemented. That is how the .NET Framework achieves portability. The MSIL is turned into executable code using a JIT (Just-In-Time) compiler. The process goes like this: when .NET programs are executed, the CLR activates the JIT compiler. The JIT compiler converts MSIL into native code on a demand basis as each part of the program is needed. Thus the program executes as native code even though it is compiled into MSIL, making the program run as fast as it would if it were compiled to native code while achieving the portability benefits of MSIL.


Class Libraries

The class library is the second major entity of the .NET Framework, designed to integrate with the common language runtime. This library gives the program access to the runtime environment. The class library consists of lots of prewritten code that all the applications created in VB .NET and Visual Studio .NET will use. The code for all the elements like forms, controls, and the rest in VB .NET applications actually comes from the Class Library.

Common Language Specification (CLS)

If we want the code which we write in a language to be used by programs in other languages then it should adhere to the Common Language Specification (CLS). The CLS describes a set of features that different languages have in common. The CLS defines the minimum standards that .NET language compilers must conform to, and ensures that any source code compiled by a .NET compiler can interoperate with the .NET Framework.

Some reasons why developers are building applications using the .NET Framework

Improved Reliability

Increased Performance

Developer Productivity

Powerful Security

Integration with existing Systems

Ease of Deployment

Mobility Support

XML Web service Support

Support for over 20 Programming Languages

Flexible Data Access

DOTNET FRAMEWORK

COMPILATION AND EXECUTION

(Figure: .NET compilation and execution. Source code is compiled into IL code and metadata, which the linker packages into an EXE or DLL. At execution time the class loader loads the assembly together with the Base Class Library, the verifier checks the code, and the JIT compiler translates the IL into native code, which is then executed.)

OVERVIEW OF C#

C# 2.0 introduces several language extensions, including Generics, Anonymous Methods, Iterators, Partial Types, and Nullable Types.

Generics permit classes, structs, interfaces, delegates, and methods to be parameterized by the types of data they store and manipulate. Generics are useful because they provide stronger compile-time type checking, require fewer explicit conversions between data types, and reduce the need for boxing operations and run-time type checks.
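As a brief, hypothetical illustration (not taken from the project code), the following C# 2.0 class is parameterized by the element type T, so the compiler checks element types at compile time and no casts or boxing are needed:

using System.Collections.Generic;

// A generic stack parameterized by the element type T.
class SimpleStack<T>
{
    private List<T> items = new List<T>();

    public void Push(T item) { items.Add(item); }

    public T Pop()
    {
        T top = items[items.Count - 1];  // last element pushed
        items.RemoveAt(items.Count - 1);
        return top;
    }
}

// Usage: SimpleStack<int> s = new SimpleStack<int>(); s.Push(5); int v = s.Pop();  // no casts, no boxing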

Anonymous methods allow code blocks to be written "in-line" where delegate values are expected. Anonymous methods are similar to lambda functions in the Lisp programming language. C# 2.0 supports the creation of "closures" where anonymous methods access surrounding local variables and parameters.
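A small, hypothetical example: an anonymous method is written in-line where a Predicate<int> delegate is expected, and it forms a closure over the local variable limit:

using System.Collections.Generic;

class AnonymousMethodDemo
{
    static void Demo()
    {
        List<int> numbers = new List<int>(new int[] { 3, 8, 12, 5 });
        int limit = 10;  // captured local variable: the anonymous method closes over it

        // Anonymous method supplied in-line as a Predicate<int>.
        List<int> small = numbers.FindAll(delegate(int n) { return n < limit; });
        // small now contains 3, 8, and 5
    }
}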

Iterators are methods that incrementally compute and yield a sequence of values. Iterators make it easy for a type to specify how the foreach statement will iterate over its elements.

Partial types allow classes, structs, and interfaces to be broken into multiple pieces stored in different source files for easier development and maintenance. Additionally, partial types allow separation of machine-generated and user-written parts of types so that it is easier to augment code generated by a tool.

Nullable types represent values that possibly are unknown. A nullable type supports all values of its underlying type plus an additional null state. Any value type can be the underlying type of a nullable type. A nullable type supports the same conversions and operators as its underlying type, but additionally provides null value propagation similar to SQL.
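The following short, hypothetical snippet illustrates nullable types and null propagation:

class NullableDemo
{
    static void Demo()
    {
        int? count = null;        // underlying type int plus an additional null state
        if (!count.HasValue)
            count = 0;            // assign a real value once it becomes known

        int? a = 5, b = null;
        int? sum = a + b;         // null propagation: the result is null because b is null
        int safe = sum ?? -1;     // ?? supplies a default when the value is null
    }
}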

This chapter gives an introduction to these new features. Following the introduction are five chapters that provide a complete technical specification of the features. The final chapter describes a number of smaller extensions that are also included in C# 2.0.

The language extensions in C# 2.0 were designed to ensure maximum compatibility with existing code. For example, even though C# 2.0 gives special meaning to the words where, yield, and partial in certain contexts, these words can still be used as identifiers.

The C# foreach statement is used to iterate over the elements of an enumerable collection.

In order to be enumerable, a collection must have a parameterless GetEnumerator method that returns an enumerator. Generally, enumerators are difficult to implement, but the task is significantly simplified with iterators.

An iterator is a statement block that yields an ordered sequence of values. An iterator is distinguished from a normal statement block by the presence of one or more yield statements:

The yield return statement produces the next value of the iteration.

The yield break statement indicates that the iteration is complete.
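A minimal, hypothetical iterator showing both statements; foreach consumes the sequence it yields:

using System.Collections.Generic;

class Evens
{
    // An iterator: yield return produces the next value, yield break ends the sequence.
    public static IEnumerable<int> UpTo(int limit)
    {
        for (int i = 0; ; i += 2)
        {
            if (i > limit)
                yield break;   // the iteration is complete
            yield return i;    // produce the next value of the iteration
        }
    }
}

// Usage with foreach, which consumes the enumerator produced by the iterator:
// foreach (int n in Evens.UpTo(10)) Console.WriteLine(n);   // prints 0 2 4 6 8 10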

OVERVIEW OF 3-TIER ARCHITECTURE

Web Sphere Application Server provides the application logic layer in a three - tier architecture, enabling client components to interact with data resources and legacy applications.

Collectively, three-tier architectures are programming models that enable the distribution of application functionality across three independent systems, typically

Client components running on local workstations (tier one)

Processes running on remote servers (tier two)

A discrete collection of databases, resource managers, and mainframe applications (tier three)

These tiers are logical tiers. They might or might not be running on the same physical server.

Three tier architecture

First tier:

Responsibility for presentation and user interaction resides with the first-tier components. These client components enable the user to interact with the second-tier processes in a secure and intuitive manner.

Web Sphere Application Server supports several client types. Clients do not access the third-tier services directly. For example, a client component provides a form on which a customer orders products. The client component submits this order to the second-tier processes, which check the product databases and perform tasks that are needed for billing and shipping.

Second tier:

The second-tier processes are commonly referred to as the application logic layer. These processes manage the business logic of the application, and are permitted access to the third-tier services. The application logic layer is where most of the processing work occurs. Multiple client components can access the second-tier processes simultaneously, so this application logic layer must manage its own transactions.

In the previous example, if several customers attempt to place an order for the same item, of which only one remains, the application logic layer must determine who has the right to that item, update the database to reflect the purchase, and inform the other customers that the item is no longer available. Without an application logic layer, client components access the product database directly. The database is required to manage its own connections, typically locking out a record that is being accessed. A lock can occur when an item is placed into a shopping cart, preventing other customers from considering it for purchase. Separating the second and third tiers reduces the load on the third-tier services, supports more effective connection management, and can improve overall network performance.

Third tier:

The third-tier services are protected from direct access by the client components residing within a secure network. Interaction must occur through the second-tier processes.

Communication among tiers:

All three tiers must communicate with each other. Open, standard protocols and exposed APIs simplify this communication. You can write client components in any programming language, such as Java, C++, or C#. These clients can run on any operating system, as long as they can speak with the application logic layer. Databases in the third tier can be of any design, provided the application layer can query and manipulate them. The key to this architecture is the application logic layer.

SYSTEM ANALYSIS

SYSTEM testing:

Software testing is an important element of software quality assurance and represents the ultimate review of specification, design, and coding. The increasing visibility of software as a system element and the costs associated with software failure are motivating forces for well-planned, thorough testing.

Though the test phase is often thought of as separate and distinct from the development effort--first develop, and then test--testing is a concurrent process that provides valuable information for the development team.

There are at least three options for integrating Project Builder into the test phase:

Testers do not install Project Builder; the developers use Project Builder functionality to compile and source-control the modules to be tested and hand them off to the testers, whose process remains unchanged.

The testers import the same project or projects that the developers use.

Create a project based on the development project but customized for the testers (for example, it does not include support documents, specs, or source), who import it.

Testing objectives:

There are several rules that can serve as testing objectives.

They are

Testing is a process of executing a program with the intent of finding an error.

A good test case is one that has a high probability of finding an undiscovered error.

A successful test is one that uncovers an undiscovered error.

If testing is conducted successfully according to the objectives stated above, it will uncover errors in the software.

Types of Testing:

Testing is the process of executing the program with the intent of finding errors. Testing cannot show the absence of defects; it can only show that software errors are present. The testing principles used are:

Tests are traceable to customer requirements.

80% of errors will likely be traceable to 20 % of program modules

Testing should begin ‘in the small’ and progress towards testing ‘in the large’.

White Box Testing:

This test is conducted during the code generation phase itself. All the errors were rectified at the moment of its discovery. During this testing, it is ensured that

All independent paths within a module have been exercised at least once.

Exercise all logical decisions on their true or false side.

Execute all loops at their boundaries.

Black Box Testing:

It is focused on the functional requirements of the software. It is not an alternative to White Box Testing; rather, it is a complementary approach that is likely to uncover a different class of errors than White Box methods. It attempts to find errors in the following categories:

Incorrect or missing functions

Interface errors

Errors in data structures or external database access

Performance errors and

Initialization errors.

It is already stated that the methodology used for program development is the ‘Component Assembly Model’. Before integrating the module-interfaces, each module-interface is tested separately. This is called Unit Testing.

Unit Testing:

This is the first level of testing. In unit testing, the different modules are tested against the specifications produced during the design of the modules. During this testing, the number of arguments is compared with the input parameters, the matching of parameters and arguments is checked, and so on. It is also ensured that the file attributes are correct, that files are opened before use, and that input/output errors are handled. Unit testing is usually conducted using a test driver.

Integration Testing:

Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with the interfaces. Bottom-up integration is used for this phase; it begins construction and testing with atomic modules. This strategy is implemented with the following steps.

Low-level modules are combined to form clusters that perform a specific software sub-function.

The cluster is tested.

Drivers are removed and clusters are combined moving upward in the program structure.

Alpha Testing:

A series of acceptance tests was conducted to enable the employees of the firm to validate the requirements. It was conducted by the end user. The suggestions, along with the additional requirements of the end user, were included in the project.

Beta Testing:

It is to be conducted by the end user without the presence of the developer. It can be conducted over a period of weeks or months. Since it is a long, time-consuming activity, its result is outside the scope of this project report, but it will help to enhance the product at a later time.

Validation Testing:

This provides the final assurance that the software meets all functional, behavioral, and performance requirements. The software is completely assembled as a package. Validation succeeds when the software functions in the manner the user expects.

Output testing:

After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required output. The output is considered in two ways: one is on screen and the other is in printed format. The output matches the requirements specified by the user. Hence output testing did not result in any correction to the system.

User Acceptance testing:

User acceptance is a key factor for the success of any system. The system under consideration was tested for user acceptance by constantly keeping in touch with the prospective system users at the time of development and making changes wherever required, particularly with respect to:

Input screen design

Output screen design

On-line message to guide the user

Format of the ad-hoc reports and other outputs.

The above testing is done by taking various kinds of test data. Preparation of test data plays a vital role in system testing. After preparing the test data, the system under study is tested using that data. While testing the system with the test data, errors are again uncovered and corrected using the above testing steps, and the corrections are also noted for future use.

TEST REPORT BY SYSTEM ANALYST / PROGRAMMER

S.NO   TESTING PARAMETER                                          OBSERVATIONS

1.     INTERFACE TESTING
       Mouse / Tab navigation                                     OK
       User friendliness                                          OK
       Consistent menus                                           OK
       Consistent graphical buttons                               OK

2.     VALIDATION TESTING
       Check for improper or inconsistent typing                  OK
       Check for erroneous initialization or default values       OK
       Check for incorrect variable names                         OK
       Check for inconsistent data types                          OK
       Check for relational / arithmetic operators                OK

3.     DATA INTEGRITY / SECURITY TESTING
       a) Data insert / delete / update                           OK
       b) Condition (underflow, overflow exception)               OK
       c) Check for unauthorized access of data                   OK
       d) Check for data availability                             OK

4.     EFFICIENCY TESTING
       a) Throughput of the system                                OK
       b) Response time of the system                             OK
       c) Online disk storage                                     OK
       d) Primary memory required by the system                   OK

5.     ERROR HANDLING ROUTINES
       a) Error descriptions are intelligible / understandable    OK
       b) Error recovery is smooth                                OK
       c) All error handling routines are tested and executed
          at least once                                           OK

USER TEST REPORT (to be filled in by the user)

S.NO   TESTING PARAMETER                                          OBSERVATIONS

1.     TEST FOR PULL-DOWN MENUS AND MOUSE OPERATIONS
       a) Are all the relevant pull-down menus, scroll bars,
          dialog boxes and buttons functioning properly?          YES
       b) Is the appropriate menu bar displayed in the
          appropriate context?                                    YES
       c) Are all menu functions and pull-down sub-functions
          properly listed?                                        YES
       d) Does each menu function perform according to the
          design specification?                                   YES
       e) Is it possible to invoke each menu function using its
          alternative keys?                                       YES
       f) Is all data content within the window properly
          addressable with mouse, function keys and keyboard?     YES
       g) Does the window regenerate properly when it is
          overwritten and then recalled?                          YES
       h) Is the active window properly highlighted?              YES

2.     TEST FOR DATA ENTRY LEVEL
       a) Is alphanumeric data entry properly echoed and input
          to the system?                                          YES
       b) Do graphical modes of data entry such as scroll bars
          work properly?                                          YES
       c) Are data input messages intelligible?                   YES
       d) Is invalid data properly recognized?                    YES
       e) Is all input data entry properly saved?                 YES

3.     TEST FOR VERIFYING OUTPUTS
       a) Is the output displayed according to the requirement
          and printed with proper alignment?                      YES
       b) If calculations are present, have they been checked?    YES
       c) Are the report formats according to need?               YES
       d) Can the reports be printed on the printer?              YES

Architecture Diagram:

INPUT DESIGN:

Input design is a part of overall system design that requires special attention because it is the most common source of data processing errors. The goal of designing input data is to make data entry easy and free from errors.

The privacy-protected query processing of the user has been designed in a way in which the user's information is protected from hacking. The query of the user is sent to the location-based server and is processed there. The form has been designed in such a way as to accept and process the query of the user. The cursor marks the place where the data must be entered into the form.

Select the Video and Load Video

Videos are converted into frames

OUTPUT DESIGN:

Computer output is the most important and direct source of information to the user. Output design aims at communicating the results of processing to the user and management. The application is successful only when it generates effective reports.

In order to improve the system's relationship with the user, efficient and effective outputs are provided as a soft copy or a hard copy of the report, according to his/her need.

(Figure: output design flow - Video clip → Gray scale image conversion → Foreground & background process → Calculate object height & width → Vehicle tracking and counting in video.)

SYSTEM IMPLEMENTATION

Create road elements like segments and objects on roads. The segments include nodes and edges. The road elements include query points and data points. Data points include the position of the data, their locations, and their accessibility from different locations.

ALGORITHM

AUTOMATIC BACKGROUND EXTRACTION:

To extract the freeway background automatically, a sufficient number of successive frames must be available for processing. The automatic background extraction starts by processing the first three successive frames (images), as in the following steps:

Step 1. Take a movie of the freeway and then convert it into a number of successive frames (images).

Step 2. Use the first three successive frames Ct-2, Ct-1, Ct to calculate the differences Dt-1,t-2 = |Ct-1 - Ct-2| and Dt,t-1 = |Ct - Ct-1|.

Step 3. Specify the gray threshold level T.

Step 4. Convert the differences to binary images DBt-1,t-2 and DBt,t-1 depending on the threshold.

Step 5. Calculate the Difference Product (DP) using the bitwise logical AND operation: DPt = DBt-1,t-2 & DBt,t-1.

Step 6. Apply binary dilation (DLT) to DPt(i,j).

Step 7. Apply image close.

Step 8. Calculate the moving object region (MOR) by filtering the closed image.

Step 9. Fill the moving object region.

Step 10. Estimate the initial background B(kk) and store this region information in the EF (Extraction Flag): B(kk) = MOR(kk) | Ct, where the symbol '|' is the bitwise logical OR operator, and EF(kk) = MOR(kk).

Step 11. For the first three successive frames, calculate MOR for the current input image and calculate the background extraction target area (ETA): ETA(kk) = EF(kk-1) & MOR'(kk), where MOR'(kk) is the 1's complement of MOR(kk).

Step 12. For the subsequent frames, extract the background pixels in the current input image and update EF: B(kk) = B(kk-1) & (Ct | ETA(kk)), EF(kk) = EF(kk-1) ⊕ ETA(kk), where ⊕ is the bitwise logical XOR (exclusive OR) operator.

Repeat Steps 1-12 until the background is obtained.
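As a hedged illustration of the core of Steps 2-5 (frame differencing, thresholding, and the logical AND), the following C# sketch operates on grayscale frames stored as byte[,] arrays. The dilation, closing, filtering, and background accumulation of Steps 6-12 are omitted, and the class, method, and parameter names are illustrative only, not taken from the project code.

using System;

static class BackgroundExtraction
{
    // Compute the binary Difference Product DPt = DBt-1,t-2 & DBt,t-1 (Steps 2-5).
    // prev2, prev1 and curr are three successive grayscale frames of equal size;
    // 'threshold' stands in for the gray threshold level T chosen in Step 3.
    public static bool[,] DifferenceProduct(byte[,] prev2, byte[,] prev1, byte[,] curr, int threshold)
    {
        int h = curr.GetLength(0), w = curr.GetLength(1);
        bool[,] dp = new bool[h, w];
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                // Absolute frame differences Dt-1,t-2 and Dt,t-1.
                int d1 = Math.Abs(prev1[y, x] - prev2[y, x]);
                int d2 = Math.Abs(curr[y, x] - prev1[y, x]);
                // Binarize both differences and AND them: a pixel belongs to the
                // difference product only if it changed in both difference images.
                dp[y, x] = (d1 > threshold) && (d2 > threshold);
            }
        }
        return dp;
    }
}

The resulting mask would then be dilated and closed (Steps 6-7) before the moving object region is filtered and filled.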

VEHICLES DETECTION:

To detect vehicles, the extracted background must be subtracted from the current image, as in the following steps:

Step 1. Subtract the extracted background from the current image.

Step 2. Find the edges of the current image and of the background image.

Step 3. Subtract the edge of the background image from the edge of the current image.

Step 4. Fill the resulting images from Steps 2 and 3, and apply a logical AND operation to the results of Steps 2 and 3.

Step 5. Filter the resulting image.

Step 6. Count the resulting moving vehicles.
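The following C# sketch illustrates Steps 1, 5, and 6 in a simplified form: the background is subtracted and thresholded to obtain a foreground mask, and connected foreground regions larger than a minimum area are counted as vehicles. The edge-based Steps 2-4 are omitted, and the class, method, and parameter names are illustrative only, not taken from the project code.

using System;
using System.Collections.Generic;

static class VehicleDetection
{
    // Step 1 (simplified): subtract the extracted background from the current
    // frame and binarize with a fixed threshold.
    public static bool[,] ForegroundMask(byte[,] background, byte[,] current, int threshold)
    {
        int h = current.GetLength(0), w = current.GetLength(1);
        bool[,] mask = new bool[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y, x] = Math.Abs(current[y, x] - background[y, x]) > threshold;
        return mask;
    }

    // Steps 5-6 (simplified): count connected foreground blobs larger than
    // minArea pixels, treating each surviving blob as one moving vehicle.
    public static int CountVehicles(bool[,] mask, int minArea)
    {
        int h = mask.GetLength(0), w = mask.GetLength(1);
        bool[,] visited = new bool[h, w];
        int count = 0;
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                if (!mask[y, x] || visited[y, x]) continue;
                int area = 0;
                Queue<int> queue = new Queue<int>();   // pixels encoded as y * w + x
                queue.Enqueue(y * w + x);
                visited[y, x] = true;
                while (queue.Count > 0)
                {
                    int index = queue.Dequeue();
                    int cy = index / w, cx = index % w;
                    area++;
                    // Visit the 8-connected neighbours of the current pixel.
                    for (int dy = -1; dy <= 1; dy++)
                        for (int dx = -1; dx <= 1; dx++)
                        {
                            int ny = cy + dy, nx = cx + dx;
                            if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                            if (mask[ny, nx] && !visited[ny, nx])
                            {
                                visited[ny, nx] = true;
                                queue.Enqueue(ny * w + nx);
                            }
                        }
                }
                if (area >= minArea) count++;          // ignore small noise regions
            }
        }
        return count;
    }
}

In this simplified form, the area filter plays the role of the filtering step; the actual system additionally uses the edge images of Steps 2-4 before counting.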

CONCLUSION

In this project, we have presented a novel computer vision system devised to track and classify vehicles with the aim of replacing inductive loop detectors (ILDs), particularly on highways. The system has been tested under different kinds of weather conditions (including rainy and sunny days that make passing vehicles cast shadows), obtaining results similar to ILDs.

Additionally, this system distinguishes itself from other computer-vision-based approaches in the way in which it can handle cast shadows without the need for any hardware other than cameras, such as GPS to estimate the direction of the shadows.

Hence, we believe that this is a viable alternative to replace ILDs, other technologies such as tags installed in vehicles, laser scanners that reconstruct the 3-D shape of the vehicles, or other computer-vision-based approaches, whose installation and maintenance are more cumbersome than using cameras only. GPU and multi-core programming allow us to achieve real-time performance with off-the-shelf hardware components.

FUTURE ENHANCEMENT:

To extend the approach to viewpoints in which more severe occlusions may occur, it would be necessary to include an interaction model between objects that can be achieved by adding Markov Random Field factors to the posterior distribution expression. Therefore, even when the image projections of two or more vehicles intersect, the 3-D model understands that they cannot occupy the same space at the same time.


