Image Processing Based Car Security System


02 Nov 2017


The issue of vehicle theft has been an unending challenge despite recent advances in car security systems such as immobilisers, tracking systems and modified alarm systems. Alarm systems in particular have occasionally become a nuisance in the environment, being triggered even when no safety issue exists; yet irrespective of these advances, car crime persists.

This project sets out to offer an advanced car security system that uses the result of facial recognition to obtain an ignition response. It focuses on using face image processing to restrict access and to assign speed limits even to registered users of the system. To meet the overall aim, a wide range of research was carried out covering modern car security systems, a detailed study of the principles behind face recognition, and a survey of embeddable hardware suitable for the application. The project further simulates the car security system using MATLAB and embeds the facial recognition program into a Raspberry Pi (RPi) to demonstrate real-time operation.

Table of Contents

Abstract

Table of Contents

Additional Materials on the Accompanying CD

Acknowledgements

1 Introduction

1.1 Background to the Project

1.2 Aims and Objectives

1.3 Problem Statement

1.4 Scope of Project

1.5 Overview of This Report

2 Investigations

3 Methodology

4 Requirements

5 Analysis

6 Design

7 Implementation

8 Testing

9 Project Management

9.1 Project Schedule

9.2 Risk Management

9.3 Quality Management

10 Critical Appraisal

11 Conclusions

11.1 Achievements

11.2 Future Work

12 Student Reflections

Bibliography and References

Appendix A – Project Specification

Appendix B – Interim Progress Report

Appendix C – Requirements Specification Document

Appendix D – User Manual

Additional Materials on the Accompanying CD


Acknowledgements

I hereby acknowledge the Nigerian Niger Delta Development Commission, Engr. Edward Moore and Blessing King, for the financial aid granted to support my study here at Coventry University. I acknowledge my project supervisor, Kennedy Iroanusi, whose guidance was instrumental to the completion of this work; I appreciate his time and effort in leading me to this point of success. I also acknowledge Osabiku Ugochukwu Jnr, Obehioye Okonofua and all my classmates who have supported me to this very point.

I also acknowledge my lecturers who have taught me, Dr Andrew Tickle, Mr Mark Oliver, Dr Richard Rider, Dr Norlaily Yaacob, Josef Grindley and all others whose teaching efforts have led to the completion of my project and given me confidence that greater heights are attainable.

1 Introduction

This chapter provides the reader with background information on the importance of an image processing based car security system for car owners. It states the overall aim and individual project objectives, and specifies and discusses the problem statement and scope of this project.

1.1 Background to the Project

The increase in car theft has always been an issue, not just in developing countries but also in the developed world, irrespective of the several types of security systems put in place to control it. Criminals adopt numerous strategies to beat the most common car security applications and systems. According to the Stolen Motor Vehicle (SMV) database, the total number of cars stolen as at the end of December 2011 was about 7.1 million worldwide, which is more than the population of some countries. Several security measures have been developed over the years in response to this issue; the most popular of these, the alarm system, has turned out to be inept.

This project uses an image processing technique in the design of a car security system, with face recognition as the tool for gaining access to and control of the car. The car is programmed to recognize its owner and grant access, and to set speed controls where necessary.

Parents have a duty of care to monitor and keep their children safe, but doing this effectively presents a challenge, as parents cannot be present at all times with their children (young drivers). Statistics for parts of the UK such as Wales show that about 2,969 young drivers were involved in car accidents in 2008 (National Statistics Ystadegau Gwladol 2008), and in 2011 some 2,776 young people were killed or seriously injured on UK roads (ITV 2012). This project will enable parents to exercise a degree of control over the speeding habits of younger drivers. The image processing based car security system is able to identify multiple users and assign speed limits as required by the Master User: the system grants access to multiple authorized users and applies the speed limit already set for each by the Master User.

The machine learning algorithm used for face detection is the Viola-Jones method, implemented in MATLAB for the purpose of design and implemented with the free computer vision C/C++ library OpenCV for embedding in hardware. Principal component analysis (PCA) is used for face recognition, offering a quick and easy approach to dimensionality reduction and pattern matching. The system detects the test face, i.e. the face of whoever enters the car, stores that face, and compares it against the training set of images already saved in the system. If a match is found, the control is set to the speed limit assigned by the Master User; if the face is not recognized in the training set, the car denies access to the unrecognized person and triggers an alarm as well.
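The access-control flow just described can be sketched in outline. The function and database names below are hypothetical stand-ins for illustration, not taken from the project code:

```python
# Hedged sketch of the access decision described above. SPEED_LIMITS and the
# match labels are made-up examples; a real system would query the recognition
# stage for the match and the Master User's stored settings for the limit.
SPEED_LIMITS = {"master": None, "young_driver": 50}  # limits set by the Master User

def grant_access(match):
    """Return (access_granted, speed_limit) for a recognition result."""
    if match is None:                       # face not found in the training set:
        return False, None                  # deny ignition and trigger the alarm
    return True, SPEED_LIMITS.get(match)    # grant access at the preset limit

print(grant_access("young_driver"))  # (True, 50)
print(grant_access(None))            # (False, None)
```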

The hardware is designed around an ARM1176JZF-S 700 MHz processor on a BCM2835 system on a chip (SoC), popularly known as the Raspberry Pi, interfaced with a transistor-transistor logic (TTL) serial JPEG colour camera. The Raspberry Pi communicates with the TTL camera serially in a handshaking manner.

1.2 Aims and Objectives

The aim of this project is to design an image processing based car security system that uses face recognition to grant access to and control of the car, so as to alleviate the issue of car theft and to serve as a supervisory control for young drivers.

Objectives:

To choose an existing facial recognition technique by considering and analysing its real-life practical implementation and feasibility.

To improve the application of the selected algorithm, based on an information processing model, for this particular application.

To embed the facial recognition software into hardware for a real-time practical demonstration.

To design a car security system that grants access via facial recognition.

1.3 Problem Statement

Access to a vehicle is usually via a key system which, in most security-conscious designs, is attached to an alarm system; this has not ended car theft, as thieves have developed strategies to override the ignition system. An image processing based software approach to deciding who should have access when starting a car is not widely available; this is why a thief can get away in split seconds if he or she can turn on the car via the ignition. Relying on the alarm system alone has also been inadequate: making it too sensitive constitutes a nuisance even when there is no attempt to steal the car, while tuning it to a lower sensitivity means the alarm might not sound during an actual break-in. The image processing based security system is designed not only to minimize car theft but also to retrieve the facial identity of vehicle thieves; this would help law enforcement agencies remove car thieves from the streets, as they are the reason for all these security measures in the first place.

It is also quite a challenge for parents to monitor the speed of their young drivers when they are left unsupervised; such assurance can, however, be provided by the image processing based car security system, which makes extended supervisory control possible via settings on the system.

1.4 Scope of Project

This project investigates various facial recognition algorithms in a later chapter and simulates the selected algorithm using MATLAB. The project is implemented on the Raspberry Pi interfaced with a TTL JPEG camera via the UART GPIO port. The TTL camera captures images in real time; each image captured by the sensor is saved on the Raspberry Pi, where the image processing is done. The Raspberry Pi then signals the user the result of the processing: if the subject is not recognized, it sends the user a message about the unrecognized subject, and if the subject is recognized in its database, it grants access and sets the specified speed limit. The speed limit is implemented by displaying the predefined limit for the recognized subject. A major limitation of this project is the effect of lighting on accuracy; testing is therefore carried out under a fixed lighting condition.

1.5 Overview of this Report

Chapter 1 provides the reader with background information on the need to improve car security systems and on the challenges caused by the inability to effectively monitor young drivers, whose speeding in turn leads to accidents. It further discusses the problem statement and scope of the project, and clearly states the project's objectives.

Chapter 2 investigates existing car security systems and identifies problems that make them inefficient. The chapter further investigates the principles and techniques of facial recognition used to achieve the project's stated objectives, and presents the survey leading to the choice of hardware platform for this project.

Chapter 3 discusses the model used in the project's software design (the waterfall model) and why it was chosen; it also discusses the choice of environmental setup and the methods used in the hardware design of the project.

Chapter 4 covers the design process for both hardware and software, identifying and listing the software user requirements and illustrating the software architecture of the system.

Chapter 5 describes the implementation of the image processing based car security system through MATLAB simulations and a prototype of the system.

Chapter 6 is concerned with testing the system's functionality and the reasons behind the choice of testing technique.

Chapter 7 shows how the project was managed within the specified duration, giving a work breakdown structure of how the project sub-tasks were handled. It also covers the risk management and quality management of the project.

Chapter 8 is the critical appraisal: a discussion of the project outcome, detailing the strengths and limitations of the project device.

Chapter 9 revisits the objectives and illustrates how they were met. Conclusions are drawn from the outcome of the project, and further work still needed is discussed.

2 Investigations

2.1 Car Security Systems

Over the years there have been various approaches to securing a vehicle, such as car alarms, car trackers and immobilisers, yet vehicle theft is still on the rise. Among other approaches studied, the Bluetooth car monitoring security system by Rashidi, Ariff and Ibrahim (2011) uses a Bluetooth module communicating with a mobile phone to receive commands and turn the security system on or off. Likewise, Porta and Sanchez (2006) used a Bluetooth/GMRS car security system with a randomly located movement-detection device to tackle car theft, sending a notification to the user via GMRS if any movement is detected. These two systems monitor and control an alarm system. Alarm-based systems rely on sensors that detect movement or touch and sound an alarm in response; they are gradually losing their appeal, as car alarms going off unnecessarily have become a common sight in urban environments. In the GSM-based car security system of Sehgal et al. (2012), the car owner receives an alert via GSM in the event of an intrusion; how an intrusion is detected, and what information is sent, are both vital. These message-passing car security systems add a feature to the alarm-type system: if the user is not within hearing distance of the alarm, or several nearby vehicles have alarms and it is unclear which one is sounding, an alert can still be received via Bluetooth or the GSM network. However, the alert contains only the notice that there is a possible intruder, with no information about the intruder. Information on the intruder is nonetheless necessary, as it could be used to identify car thieves in a community.

A car security device that produces information about the intruder is proposed as a cleverer approach to car theft by Shaikh and Kate (2012). In their work they built a prototype car security system on an embedded ARM7 platform, consisting of a face detection system, a GPS module and a GSM module. The system operates such that on entry to the car it captures an image and compares it with a set of predefined images; if the image does not match, the system sends information via MMS to the car owner, who can also learn the car's location via the GPS module. Similarly, Bagavathy, Dhaya and Devakumar (2011) used an ARM processor and a Principal Component Analysis-Linear Discriminant Analysis algorithm to implement facial recognition for a real-time car theft decline system. These approaches are considerably more advanced than the usual alarm systems.

The system designed in this project implements principal component analysis for face recognition and adds the functionality of a Master User setting a speed limit for every authorized user, helping the Master User guarantee supervision of speed limits in his or her absence. This is useful in a parent-child control situation, with the Master User (parent) specifying a speed limit to guarantee the child's safety in an unsupervised scenario.

2.2 Image processing based security systems

Image processing is a form of signal processing that takes an image as input; in this context it refers to the processing of digital images, i.e. what is done with the input image, such as filtering, resizing and colour extraction. The aim of a security system is to safeguard a person or object by restricting access, and satisfying the requirements for quality access restriction yields an efficient security system. To guarantee access restriction, biometric techniques are commonly applied across a wide range of security applications, with image processing used to process the acquired digital image. Image processing based techniques used in security systems include iris recognition, fingerprint recognition, palm recognition, retinal recognition and face recognition. For this car security system the face recognition approach is chosen because it is the least intrusive from a biometric sampling point of view, requiring neither contact with nor the awareness of the subject (Majekodunmi and Idachaba 2011).
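As a small illustration of the kinds of digital image processing named above (colour extraction and resizing), the sketch below converts a synthetic RGB frame to grayscale and downsamples it; a real system would operate on a captured camera frame instead:

```python
import numpy as np

# Synthetic 120x160 RGB "frame" standing in for a captured camera image.
rgb = np.random.randint(0, 256, (120, 160, 3)).astype(np.float64)

# Colour extraction: luminance-weighted grayscale (ITU-R BT.601 weights).
gray = rgb @ np.array([0.299, 0.587, 0.114])

# Naive resizing: keep every second row and column.
small = gray[::2, ::2]

print(gray.shape, small.shape)  # (120, 160) (60, 80)
```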

2.3 Face Recognition

Face recognition is an important ability possessed by humans; even an infant responds to face shapes after birth and can discriminate its mother's face from a stranger's at the tender age of 45 hours (Voth 2003). This is not the case for machines: getting a machine to recognize human faces or objects brings in machine learning, a broad field of artificial intelligence, which aims to mimic the intelligent abilities of humans in machines (Rätsch 2004). Machine learning algorithms are developed to enable machines to differentiate between exemplars; objects are treated as patterns in machine learning.

A face recognition system can be divided into three main steps: face detection, feature extraction and face recognition (Zhao et al. 2003).

Figure 2.0 Face recognition steps (Input Image/Video → Face Detection → Feature Extraction → Face Recognition → Identification/Verification)

2.3.1 Face detection

Face detection is the discovery by a machine of the faces contained in an image; it involves algorithms that identify the sub-regions of an image containing faces and that aid the alignment of the face image. There are various face detection algorithms, some of which include the Intel computer vision library (OpenCV), which contains an extended realization of the Viola-Jones object detection algorithm; the Face Detection Library (FDLib) developed by Kienzle et al. (Degtyarev and Seredin 2010); and the UniS algorithm developed at the University of Surrey. This project focuses on the OpenCV extended Viola-Jones algorithm, as it is the most popular free algorithm and offers relatively very good performance (Degtyarev and Seredin 2010).

2.3.2 Viola Jones Algorithm

2.3.2.1 Haar- like Features

The Viola-Jones algorithm uses rectangular features known as Haar-like features to detect faces. Each Haar-like feature is a rectangle with a white and a dark side, as shown in Figure 2.1 below, and each feature yields a single value, calculated by subtracting the sum of the pixels under the white region from the sum under the black region.

Figure 2.1 Examples of some Haar-like features, (a)-(e)

The Haar-like features are scanned across an input image to find areas that match the feature; each Haar object matches areas of the face that resemble it. This is further illustrated in Figure 2.2 below.

Figure 2.2 Illustrates the matching of Haar-like feature (Viola and Jones 2001).

In the Haar-like features the black region is assigned a value of +1 while the white region is assigned a value of -1. Considering Figure 2.2, where (a) is the input image and (b) resembles the Haar-like feature of Figure 2.1(b): the sum of all pixel values in the black region is computed, likewise the sum in the white region, then the white sum is subtracted from the black sum to output a single value. The feature in (b) shows that, when that particular Haar-like feature is applied, the eye region of the input image is darker than the surrounding face, while in (c) the feature matches the nose, signifying that the nose region is brighter than its surroundings, which is reflected in higher pixel values in that region when the feature is scanned across the image.

The general idea is that when a particular Haar-like feature is applied to an image, high values are obtained at the pixels where the feature pattern matches. A glance at Figure 2.1(c) above shows that the feature applied there can be used to extract the bridge of the nose.
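The single value produced by a Haar-like feature can be computed directly. The tiny patch below is invented for illustration, with a bright lower half, and the feature is the two-rectangle kind of Figure 2.1(b) (black sum minus white sum):

```python
import numpy as np

# A 4x4 pixel patch (made up for illustration) whose lower half is brighter.
patch = np.array([
    [1, 1, 2, 2],
    [1, 2, 2, 3],
    [7, 8, 8, 9],
    [7, 7, 8, 8],
], dtype=np.int64)

white = patch[:2, :]   # upper half of the feature window (weight -1)
black = patch[2:, :]   # lower half of the feature window (weight +1)

# Feature value: sum under the black region minus sum under the white region.
feature_value = black.sum() - white.sum()
print(feature_value)  # 48
```

A large value like this signals that the patch matches the feature's light-over-dark pattern; near-zero values mean no match.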

2.3.2.2 Integral Image

The Viola-Jones algorithm evaluates the Haar-like features at a 24x24 base resolution (Viola and Jones 2001); applying all possible Haar-like features gives an exhaustive set of rectangle features that is quite large: 45,396. The AdaBoost algorithm is therefore used to keep the process fast and feasible by eliminating features that are redundant.

Viola and Jones employed a smart technique they referred to as the integral image, used to reduce the computational cost of repeatedly calculating the sum of pixels under a Haar region; this is illustrated below.

Figure 2.3 The concept of the integral image (Viola and Jones 2001).

Digital images are represented in matrix form during processing. In Figure 2.3(a) the shaded number is at position (3, 2) of the 3x3 matrix. Suppose a Haar-like feature falls on the part of the image enclosed in red and the sum of that region is required; the sum here is clearly 6, but doing this computation with values larger than 1, for a large number of Haar-like features in different positions, is computationally expensive. So each pixel is instead replaced with its integral value: the value at row 3, column 2, i.e. position (3, 2), becomes 6, and placing the same L-shaped, ruler-like frame at every position encloses a certain set of numbers in the matrix. For example, placed at position (3, 1), the integral value obtained by adding all the enclosed numbers is 3, as shown in Figure 2.3(b) for each position. Replacing every pixel of an input image with its integral value, as illustrated in Figure 2.3, is the integral image concept. It reduces computational complexity, as explained with Figure 2.4 below.

Figure 2.4 Explanation of Integral Image

Considering the image with regions A to D, if a haar-feature falls on the pixels in the region D, to get the sum of the region as explained above we need to calculate the sum of all the pixels in the region, but if the figure is the integral image and the numbers 1, 2, 3 and 4 are the values at the end of each haar-feature, then the Pixel sum in the region can be retrieved easily by

Sum of all pixels in D = 1 + 4 - (2 + 3)

= A + (A + B + C + D) - ((A + B) + (A + C))   (1)

= D

This improves computational efficiency, as only the integral pixel values at the four corners of the region of interest are needed to compute the sum of pixels within the region.
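The four-corner lookup above is easy to verify in a few lines; `rect_sum` is an illustrative helper that recovers any rectangular sum from the integral image exactly as in Figure 2.4:

```python
import numpy as np

np.random.seed(1)
img = np.random.randint(0, 256, (24, 24)).astype(np.int64)

# Integral image: entry (r, c) holds the sum of all pixels above and left of it.
ii = img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] from four integral-image lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]      # subtract the strip above (regions A + B)
    if left > 0:
        total -= ii[bottom, left - 1]    # subtract the strip to the left (A + C)
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]   # region A was subtracted twice: add it back
    return total

print(rect_sum(ii, 5, 3, 10, 7) == img[5:11, 3:8].sum())  # True
```

Once the integral image is built, every Haar-like feature evaluation costs a handful of additions regardless of the rectangle's size.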

2.3.2.3 Adaboost Algorithm and Cascading

AdaBoost stands for adaptive boosting (Kim et al. 2011); it was used in the Viola-Jones work to reduce the number of Haar features needed for face detection. Since only a small subset of the Haar-like features is needed, the determination of which features are relevant is done by the AdaBoost algorithm, a machine learning algorithm used to select only the best Haar-like features. It assigns weights to the selected features, and a linear combination of the features is used to decide whether an image is a face or not.

A strong classifier is formed as a weighted linear combination of weak classifiers applied to the image:

H(x) = α1h1(x) + α2h2(x) + ... + αnhn(x)   (2)

where each hi is a weak classifier evaluated on the image x and each αi is its weight.

The strong classifier is a linear combination of weighted simple weak classifiers; each Haar-like feature may be seen as a simple weak classifier. AdaBoost uses an iterative algorithm to select new weak classifiers, and each training image receives a weight signifying its importance. The algorithm essentially performs Haar-like feature selection through an adaptive construction to obtain the final strong classifier.
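The weighted combination can be sketched as follows. The weak classifiers and their weights are invented for illustration; a trained detector would use thresholded Haar-like feature responses and learned weights:

```python
# Each weak classifier is only slightly better than chance on its own; the
# thresholds and alpha weights below are made up, not learned values.
weak_classifiers = [
    lambda x: 1 if x[0] > 0.5 else -1,
    lambda x: 1 if x[1] > 0.3 else -1,
    lambda x: 1 if x[0] + x[1] > 1.0 else -1,
]
alphas = [0.9, 0.4, 0.7]  # weights AdaBoost would assign to each weak classifier

def strong_classify(x):
    """Weighted vote of the weak classifiers: the final face / non-face decision."""
    vote = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    return 1 if vote >= 0 else -1

print(strong_classify((0.8, 0.6)))  # 1  (classified as a face)
print(strong_classify((0.1, 0.1)))  # -1 (classified as a non-face)
```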

The basic principle behind cascading is to reject non-face windows as quickly as possible. The detector is scanned across the image with a changing size on every pass, and it is designed to spend more time on probable face images while discarding undesirable (non-face) windows quickly. The features are grouped into several stages, each containing a number of features. Each stage determines whether a sub-window could be a face, and only sure and probable faces are passed to the next stage; sub-windows that are sure non-faces at any stage are discarded. This is shown in Figure 2.5: only likely faces at each stage proceed to the next, which reduces computation time.

Figure 2.5 Block diagram illustrating the principle of Cascading

Within the cascade, AdaBoost is used to design every single stage. If a window passes all the stages, it is classified as a face; if it fails at any stage it is not passed on, is classified as a non-face, and is discarded.
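The early-rejection behaviour can be sketched with stand-in stage functions; real stages would be AdaBoost-trained classifiers operating on Haar-like feature values:

```python
# Each stage returns True (could be a face, pass it on) or False (sure non-face).
# The stage rules and window fields below are placeholders for illustration.
stages = [
    lambda w: w["mean"] > 40,       # cheap first stage rejects most windows
    lambda w: w["contrast"] > 10,   # later stages are stricter (and costlier)
    lambda w: w["eyes_darker"],
]

def cascade(window):
    """Classify a window as a face only if it passes every stage."""
    for stage in stages:
        if not stage(window):
            return False            # rejected early; later stages never run
    return True

face_like = {"mean": 90, "contrast": 25, "eyes_darker": True}
flat_wall = {"mean": 90, "contrast": 2, "eyes_darker": False}
print(cascade(face_like), cascade(flat_wall))  # True False
```

Because most windows in a frame are non-faces, failing them at the first cheap stage is where the cascade's speed comes from.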

2.3.3 Feature Extraction

There are various feature extraction algorithms, many of which were not specifically designed for face recognition but have been modified to suit the purpose. Following (de Carrera 2010), some feature extraction algorithms include:

Principal Component Analysis (PCA) - eigenvector-based, linear map.

Linear Discriminant Analysis (LDA) - eigenvector-based but supervised, linear map.

Kernel PCA or LDA - eigenvector-based, non-linear map, uses kernel methods.

Independent Component Analysis (ICA) - linear map, separates non-Gaussian distributed features.

Neural network based methods - different categories of neural networks, some using PCA.

2.3.4 Face Recognition Algorithms

Many face recognition techniques have been developed over the past few decades. Zhang et al. (2012) reviewed three facial recognition algorithms, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Elastic Bunch Graph Matching (EBGM), using MATLAB, with accuracy, processing speed and tolerance as the testing criteria. Their findings were that LDA and EBGM performed better than PCA, except that EBGM has a very slow computational time and would be unlikely to be feasible in a real-time system; in terms of accuracy, EBGM was fairly comparable with LDA, depending on the database used for testing. Zhang et al. thus show that PCA and LDA are both worth considering for a timely system. Other face recognition algorithms include Independent Component Analysis (ICA), the Adaptive Appearance Model (AAM) and neural networks with Gabor wavelets (de Carrera 2010). PCA is implemented in this project because of its simplicity and speed, and because the database used is small: Martinez and Kak (2001) illustrated how PCA can outperform LDA when the number of samples per class is small or when the training data samples the underlying distribution non-uniformly.

Principal Component Analysis (PCA)

The goal of PCA is dimensionality reduction; it is a data-reduction method that re-expresses raw data with an alternative set of parameters such that the noise and redundancy in the data are kept minimal. It uses an orthogonal transformation to convert a set of input images (faces) into a smaller number of eigenvectors of these faces, known as eigenfaces (Turk and Pentland 1991).

The noise and redundancy of the data are expressed by a covariance matrix: an image with m pixels can be re-expressed as an m-row, one-column vector (Zhang et al. 2012),

X = [x1, x2, ..., xm]^T   (3)

and the covariance matrix is defined as

C = E[(X - μ)(X - μ)^T]   (4)

where μ is the mean of the data. The noise and redundancy are reflected in the covariance matrix, whose diagonal holds the variance of each pixel: a relatively large value indicates that much of the image's information is contained there, while a smaller value shows that the element is less important and can be neglected. In the PCA problem the interest is therefore in maximizing the diagonal elements and minimizing the off-diagonal elements (Zhang et al. 2012), which can be achieved by finding a transform matrix with orthogonal vectors that diagonalizes the covariance of the transformed data.

This means that if we have M faces Γ1, Γ2, ..., ΓM, then the mean of all faces is defined as

Ψ = (1/M) Σ_{n=1..M} Γn  (Turk and Pentland 1991). (5)

Each face differs from the mean face by the vector Φi = Γi - Ψ.

This gives rise to a set of very large vectors, with covariance matrix

C = (1/M) Σ_{n=1..M} Φn Φn^T = A A^T   (6)

where A = [Φ1 Φ2 ... ΦM] is an N² x M matrix and C is an N² x N² matrix; computing the eigenvectors of C directly is computationally cumbersome. This was solved as follows: M, the number of input faces (training images), satisfies M << N², so the dimensionality of the cumbersome matrix is reduced using the decomposition described in (Kirby and Sirovich 1990), computing the eigenvectors and eigenvalues of the much smaller M x M matrix A^T A and mapping them back to eigenvectors of C = A A^T, which is what is needed to perform the Karhunen-Loève transform (KLT).

The computed eigenvectors U = [u1, u2, ..., uM] are normalized and sorted in decreasing order of their corresponding eigenvalues, after which each vector is transposed and arranged as a row of the transformation matrix T (Perlibakas 2004).

This allows us to project any given data X into the eigenspace using the formula

Y = T X,   (7)

where X = [x1, ..., x_{N²}]^T and Y = [y1, ..., y_M]^T.

The transformation is defined in such a way that the first uncorrelated face image (eigenface) exhibits the most dominant features among all the eigenfaces; the features are arranged in descending order, from the most dominant to the least dominant.
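The derivation above, including the Kirby-Sirovich small-matrix trick, can be sketched in NumPy. The random "faces" and the 32x32 size merely stand in for a real training set:

```python
import numpy as np

rng = np.random.default_rng(0)
M, n_pixels = 8, 32 * 32                     # M training faces of 32x32 pixels
faces = rng.random((n_pixels, M))            # columns are the face vectors

mean_face = faces.mean(axis=1, keepdims=True)      # the mean face
A = faces - mean_face                              # columns are the vectors Phi_i

# Eigenvectors of the small M x M matrix A^T A instead of the huge
# n_pixels x n_pixels covariance A A^T (Kirby-Sirovich decomposition).
eigvals, eigvecs = np.linalg.eigh(A.T @ A)
order = np.argsort(eigvals)[::-1][: M - 1]   # drop the near-zero eigenvalue (rank M-1)

eigenfaces = A @ eigvecs[:, order]           # map back to eigenvectors of A A^T
eigenfaces /= np.linalg.norm(eigenfaces, axis=0)   # normalize, sorted by eigenvalue

# Project a probe face into the eigenspace (Y = T X).
probe = rng.random((n_pixels, 1)) - mean_face
weights = eigenfaces.T @ probe
print(weights.shape)  # (7, 1)
```

Recognition then reduces to comparing the probe's weight vector against the stored weight vectors of the training faces, e.g. by nearest Euclidean distance.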

2.4 A Survey on Microcontrollers and Embedded Platforms

The purpose of this survey is to select suitable hardware for this project from the readily available hardware that could be used, based on cost and on the additional features offered.

A microcontroller unit (MCU) is a single integrated circuit used to execute a user program, in most cases for the purpose of controlling a device or devices (Steiner 2005). The use of a microcontroller in system design brings benefits such as portability, low power consumption, low cost, and an all-in-one design: a CPU, ROM, RAM and I/O ports built in to execute a dedicated task. Many microcontroller manufacturers exist, including Advanced RISC Machines (ARM), Atmel, Intel (8051/8052 cores), Microchip and Texas Instruments. The choice of a microcontroller depends on certain factors, which include (Takawira and Dawoud 2007):

Cost: The cost of the selected microcontroller, together with the components required to fulfil the requirements of the system being designed, is compared across candidates.

Performance: This refers to the response time, i.e. the time required to complete a task. Some applications demand a very fast response, and for such systems performance weighs heavily in the choice.

Power: This refers to the power consumption of the system, as it may determine the lifetime of a battery or the cooling requirement of an integrated circuit.

Flexibility: This checks whether the functionality of the system can be changed without incurring heavy additional cost.

Reliability, Availability and Serviceability (RAS): Reliability checks on the attribute of the Microcontroller to consistently perform according to its specification. Availability investigates the source to obtain the MCU and whether it is available (in-stock) or not. Serviceability entails if there is good Manufacturer support for the Microcontroller purchased.

Maintainability: This ensures that the system can be modified or kept functioning after its initial release, even by a different designer.

Range of complementary hardware: Some microcontrollers have more existing supported ICs than others. Depending on the application, a wide range of supported complementary hardware improves the maintainability of the system and, in the event of a failure, provides readily available replacements.

Environmental constraints: Some microcontrollers suit certain applications better than others; for example, in Wireless Sensor Network (WSN) applications, where sensors operate in harsh weather conditions or climates, the tolerance of the microcontroller must also be considered.

Ease of Use: This is an important factor in determining the choice of microcontroller to use for a given application as time plays a key role in designing a system. The ease of use affects the development, implementation and testing time.

Safety: This checks the probability that the system will not cause any form of harm either directly or indirectly.
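The criteria above can be combined into a simple weighted decision matrix. The sketch below is illustrative only: the weights, the 1–5 scores, and the subset of criteria used are hypothetical placeholders, not measured values from this project.

```python
# Hypothetical weighted decision matrix for the selection criteria above.
# Weights and 1-5 scores are illustrative placeholders, not measurements.

weights = {"cost": 0.25, "performance": 0.25, "power": 0.15,
           "ease_of_use": 0.20, "availability": 0.15}

candidates = {
    "PIC MCU":      {"cost": 5, "performance": 2, "power": 4,
                     "ease_of_use": 3, "availability": 4},
    "Gumstix":      {"cost": 2, "performance": 5, "power": 3,
                     "ease_of_use": 3, "availability": 3},
    "Raspberry Pi": {"cost": 4, "performance": 4, "power": 3,
                     "ease_of_use": 5, "availability": 5},
}

def weighted_score(scores):
    """Sum of each criterion score multiplied by its weight."""
    return sum(weights[c] * s for c, s in scores.items())

# Rank the candidate platforms by their weighted totals.
ranked = sorted(candidates, key=lambda n: weighted_score(candidates[n]),
                reverse=True)
print(ranked[0])  # -> Raspberry Pi (under these example weights)
```

Changing the weights to match a particular application (e.g. prioritising power for a battery-operated system) can change the ranking, which is the point of making the trade-offs explicit.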

One important determining factor not mentioned in the above selection criteria is the demand of the type of application. Digital signal processing requires MCUs capable of handling applications with high computational requirements, and image processing in particular demands processors with high computational capability and a larger RAM size. Table 2.4.1 lists some possible development platforms that can be utilised for digital signal processing.

The PIC MCU costs the least, but the price stated in Table 2.4.1 is for the processor alone; it would increase if additional development components were utilised. Its speed is also insufficient, especially for handling video signals.

The Gumstix range offers a variety of embedded platforms capable of handling digital images, with up to 512 MB of RAM, which is similar to the Raspberry Pi. The Gumstix is expensive compared to the Raspberry Pi, although some packages include an LCD screen and memory card and, depending on the pack, network cables. It also offers wireless communication features.

Table 2.4.1 Features of some development platforms that can be utilised for digital signal processing.

| Features | PIC MCU dsP30F614A (Microchip 2012) | Gumstix Gum3703FE (Gumstix 2012) | Raspberry Pi (RPI) Model B (Raspberry Pi 2012) | Spartan-6 Xilinx FPGA |
|---|---|---|---|---|
| Cost | $7.25 | >$200.00 | $25 | >$199 |
| MIPS / processor speed | 30 MIPS | 2000 MIPS / 800 MHz (some packs up to 1 GHz) | 875 MIPS / 700 MHz | Can operate up to 390 MHz |
| Development features for digital image processing | Requires external memory to handle large data; available features are inadequate without modification | Sufficient features, including 512 MB RAM, wireless communication and camera connectors | Up to 512 MB RAM; can take wireless devices, an HDMI monitor, keyboard and mouse | Block RAMs are fundamentally 18 Kb each |
| UART features | Has 2 UARTs | Has UART | Has UART | Has UART |

FPGAs have a different mode of operation: they contain logic blocks, which are programmable logic components, together with a hierarchy of reconfigurable interconnect that allows the blocks to be wired together. They are expensive when compared to the Raspberry Pi.

The RPI is fairly cheap and easy to interface with a variety of hardware components, such as a keyboard, mouse and visual display, simply by plugging them in directly, which gives the developer room to conveniently build an embedded application. Its simplicity is attractive, saving time and cost while still achieving a high-quality result.
