Radial Basis Neural Network Based Intelligent Aerial


02 Nov 2017


Abstract- In this paper, we propose an automatic vehicle detection system for aerial surveillance that does not assume any prior information about camera heights, vehicle sizes, or vehicle colors. The system avoids region-based classification, which depends heavily on computationally intensive color segmentation algorithms, and instead performs vehicle detection by pixelwise classification. Only a small number of positive and negative samples are used. To increase accuracy, we extract edge, edge-direction, corner, texture, and color features. For vehicle color extraction, we utilize a color transform and a color classification model based on the (u, v) color domain. For edge detection, we apply moment-preserving thresholding to the Canny edge detector automatically, which increases the adaptability and accuracy of detection across various aerial images. The Harris corner detector is then applied to detect corners. The number of frames required to train the RBFNN (Radial Basis Function Neural Network) is very small. The advantages of the proposed framework are high classification accuracy, low computational complexity, and accuracy that is independent of illumination changes. Overall, the framework does not require a large number of training samples.

Index Terms - Aerial surveillance, RBFNN (Radial Basis Function Neural Network), vehicle detection.

I. INTRODUCTION

Aerial surveillance has a long history in the military for observing enemy activities and in the commercial world for monitoring resources such as forests and crops. Similar imaging techniques are used in aerial news gathering and in search and rescue. Surveillance is the monitoring of behavior, activities, or other changing information, usually of people, for the purpose of influencing, managing, directing, or protecting them. Surveillance is therefore an ambiguous practice, sometimes creating positive effects and at other times negative ones. It is sometimes done in a surreptitious manner, and it most commonly refers to the observation of individuals or groups by government organizations. Surveillance cameras are video cameras used to observe an area. They are often connected to a recording device or IP network, and/or watched by a security guard or law enforcement officer. Cameras and recording equipment used to be relatively expensive and required human personnel to monitor camera footage. The use of surveillance cameras by governments and businesses has dramatically increased over the last 10 years. Aerial surveillance is the gathering of surveillance data, usually visual imagery or video, from an airborne vehicle such as an unmanned aerial vehicle, helicopter, or spy plane. Military surveillance aircraft use a range of sensors (e.g., radar) to monitor the battlefield.

Anishmija S.L., Computer Science and Engineering, Vins Christian College of Engineering, Kanyakumari, India. E-mail: [email protected]

Ms. Jemimah Simon, Computer Science and Engineering, Vins Christian College of Engineering, Kanyakumari, India. E-mail: [email protected]


Fig. 1: Airborne camera.

Traffic surveillance has been important to the Department of Transportation. Several options were considered out of the necessity of developing a system that would enable the next generation of traffic surveillance, but unfortunately, most of them were not practical. Aerial surveillance technologies are used in a variety of applications, such as military operations, police work, traffic management, and disaster management systems. Compared with other video surveillance technologies, such as fixed-camera video surveillance and ground surveillance, aerial surveillance is easier and quicker to deploy, is more suitable for monitoring fast-moving targets, and covers a larger spatial area.

We present an automatic vehicle detection system for aerial surveillance that does not assume any prior information about camera heights, vehicle colors, or vehicle sizes. The challenges of vehicle detection in aerial surveillance include camera motions such as panning, tilting, and rotation. Here, panning refers to the rotation of the camera in the horizontal plane, and tilting refers to its rotation in the vertical plane.

II. PAST WORK

Suman Srinivasan and Haniph Latchman [4] considered the software perspective: it is critical to design an airborne moving-vehicle detection method with a high detection rate and a low false-positive rate while performing in real time. However, such an airborne urban moving-vehicle detection method is difficult to design, and the most difficult issue is camera vibration.

Anand C. Shastry [5] proposed an airborne video registration method that sharply reduces camera vibration. Nevertheless, the detection rate of a system based on registered images can only reach 65%. Additionally, urban traffic surveillance from an airborne platform presents further difficulties that prevent traditional highway detection methods from achieving good performance. These difficulties are as follows:

(1) There are many moving vehicles in a frame, so two adjacent vehicles may easily be regarded as one by an image subtraction method. For example, Coifman proposed a simple but efficient subtraction method for roadway traffic monitoring from an unmanned aerial vehicle; however, this method is only suitable for situations with little traffic on the road.

(2) Due to illumination variance and complicated objects beside the road, the detection background is always complex, and the optical flow algorithm cannot meet real-time requirements. The complicated urban traffic background leads to substantial computation time and, because of abundant noise, raises the false-positive rate of image subtraction methods.

(3) Thermal noise is severe, and algorithms based on thermal image processing cannot achieve good performance. For example, E. Michaelsen proposed a three-level classification method based on thermal images that performed well on highways, but it does not perform as well in urban situations. Due to the difficulties mentioned above, airborne urban moving-vehicle detection still requires further research.

S. Hinz and A. Baumgartner [7] introduced automatic vehicle detection in monocular large-scale aerial images. The extraction is based on a hierarchical model that describes prominent vehicle features at different levels. The object model comprises contextual knowledge, i.e., relations between a vehicle and other objects, for example, the pavement beside a vehicle and the sun causing the vehicle's shadow projection. In order to avoid time-consuming grouping algorithms in the early stages of extraction, the method first focuses on generic image features such as edges, lines, and surfaces.

The extraction strategy is derived from the vehicle model and, consequently, follows two paradigms: a) coarse-to-fine and b) hypothesize-and-test. It consists of four steps: 1) creation of regions of interest (RoIs), 2) hypothesis formation, 3) hypothesis validation and selection, and 4) 3-D model generation and verification. No specific vehicle models are assumed, which makes the method flexible; however, the system misses vehicles when the contrast is weak or when neighboring objects interfere.

H. Cheng and D. Butler [8] noted that aerial video surveillance has proved to be an effective way to collect information for a variety of applications, including military operations, law-enforcement activities, disaster management, and commercial applications. They proposed a video segmentation algorithm for aerial surveillance video. The algorithm uses a Mixture of Experts (MoE) built around a supervised image segmentation algorithm called the Trainable Sequential MAP (TSMAP) segmentation algorithm. Compared with other video surveillance technologies, such as fixed-camera video surveillance and ground surveillance, aerial surveillance is easier and quicker to deploy, is more suitable for monitoring fast-moving targets, and covers a much larger spatial area.

In this system, video segmentation can be broadly classified into three categories: (1) supervised video segmentation, (2) unsupervised video segmentation, and (3) specialized video segmentation.

Supervised video segmentation partitions a video frame into non-overlapping regions according to what the algorithm has been taught. During training, sample images or videos with corresponding ideal segmentations, also called ground-truth segmentations, are presented to the supervised segmentation algorithm. An unsupervised segmentation algorithm provides accurate region boundaries without such training. Specialized video segmentation targets specific tasks, such as moving-object detection.

R. Lin, X. Cao, and Y. Xu [11] addressed urban traffic surveillance, which is designed to improve traffic management and is an important part of intelligent traffic systems (ITS). In particular, airborne moving-vehicle detection has become a new but active research area owing to its wide view and low cost. However, airborne urban traffic surveillance is affected by many difficulties, such as camera vibration, vehicle congestion, background variance, and serious thermal noise. Consequently, image subtraction and thermal image processing have low detection rates, while the optical flow method cannot meet real-time requirements. The authors proposed a coarse-to-fine method divided into two stages: (1) pre-processing and (2) classification inspection.

In the pre-processing stage, candidate regions of moving vehicles are obtained through road detection, removal of non-vehicle regions, and moving-region extraction. This stage is fast but still yields a relatively high false-positive rate. In the classification inspection stage, a well-trained cascade classifier refines the candidate regions to maintain a higher detection rate and a lower false-alarm rate. The pre-processing procedures are: (i) register two consecutive frames captured in an aerial video; (ii) use road detection to extract the non-road regions in the registered frames; and (iii) extract possible vehicle regions from the non-road regions according to vehicle size, because the non-road regions include vehicles as well as other objects such as buildings around the road.

The main disadvantage of this method is that many rotated vehicles are missed. Such results are not surprising given experience with face detection using cascade classifiers: if only frontal faces are trained, faces with other poses are easily missed, but if posed faces are added as positive samples, the number of false alarms surges.

III. PROPOSED WORK

The proposed vehicle detection framework consists of a training phase and a detection phase. In this paper, we design a new vehicle detection framework that preserves the advantages of existing works while avoiding their drawbacks. The modules of the proposed framework are illustrated in Fig. 2. In the detection phase, we first read the video signal and convert it into image frames. Afterward, we perform background color removal based on the color histogram: colors whose histogram bins have high counts are treated as background and removed.

Fig. 2: Proposed system framework. In both the training phase and the detection phase, the video is converted into image frames, background color is removed, and features are extracted through 1) local feature analysis (corner detection and edge detection) and 2) color feature analysis (color transform and color classifier). A Radial Basis Function network performs classification, followed by post-processing to produce the vehicle detection results.

If a histogram bin has a low count, its pixels are treated as foreground, i.e., potential vehicle regions, and the feature extraction process is then performed. The same feature extraction process is used in both the training phase and the detection phase. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train a Radial Basis Function Neural Network; a dynamic Bayesian classifier is then applied to produce the final vehicle detection results.

Here, we elaborate on each module of the proposed framework in detail.

A) Background Color Removal

Background color removal eliminates the non-vehicle regions of the scene in aerial images (see Fig. 3). We construct the color histogram of each frame and remove the dominant colors of the scene. Here, the colors are quantized into 48 histogram bins. Among all histogram bins, the 12th, 21st, and 6th bins are the highest and are thus regarded as background colors and removed. The removed pixels need not be considered in subsequent detection, so background color removal not only reduces false alarms but also speeds up the detection process.

Fig. 3: Color histogram of a frame.
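As a rough illustration, the histogram-based background removal above can be sketched in a few lines of Python. The bin counts below (4 bins per channel, so 64 total, with 3 background bins) are illustrative parameters of this sketch, not the paper's exact settings:

```python
import numpy as np

def remove_background_colors(frame, bins_per_channel=4, n_background_bins=3):
    """Quantize RGB colors into a coarse histogram and mask out the most
    frequent bins, which are assumed to be background.

    frame: (H, W, 3) uint8 array. Returns a boolean foreground mask."""
    step = 256 // bins_per_channel
    quantized = (frame // step).astype(np.int64)     # per-channel bin index
    bin_ids = (quantized[..., 0] * bins_per_channel ** 2
               + quantized[..., 1] * bins_per_channel
               + quantized[..., 2])                  # single bin id per pixel
    counts = np.bincount(bin_ids.ravel(), minlength=bins_per_channel ** 3)
    background_bins = np.argsort(counts)[::-1][:n_background_bins]
    # Pixels whose bin is among the most frequent are treated as background.
    return ~np.isin(bin_ids, background_bins)
```

Pixels surviving this mask are the only ones passed to the later feature extraction and classification steps.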

Fig. 4 below illustrates the background removal results.

Fig. 4: (a) Original image. (b) Background color removal result.

B) Feature Extraction

Feature extraction is performed in both the training phase and the detection phase.

(1) Local Feature Analysis:

Local feature analysis subdivides images into small parts. Corners and edges are usually located at pixels carrying more information. We use the Harris corner detector to detect corners. To detect edges, we apply a moment-preserving thresholding method to the classical Canny edge detector so that thresholds are selected adaptively according to different scenes. The Canny edge detector requires two important thresholds, i.e., the lower threshold and the higher threshold.
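The Harris corner response can be sketched in plain NumPy. This is a minimal version under simplifying assumptions: a 3x3 box window instead of a Gaussian, and no non-maximum suppression, both of which a practical detector would add:

```python
import numpy as np

def harris_response(gray, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2, where M is the
    structure tensor of image gradients summed over a 3x3 window.
    gray: 2-D float array. Larger R indicates a more corner-like pixel."""
    # Central-difference gradients along rows (y) and columns (x).
    Iy, Ix = np.gradient(gray)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # 3x3 box sum via shifted copies of the zero-padded array.
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2
```

Thresholding this response (and keeping local maxima) yields the corner mask used as one of the features below.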

The i-th gray-level moment of an image is

m_i = (1/N) * sum_j n_j * (g_j)^i, i = 1, 2, 3, ...

where N is the total number of pixels in the image and n_j is the number of pixels with gray value g_j. As the illumination in every aerial image differs, the desired thresholds vary, and adaptive thresholds are required. Each moment is thus computed by weighting each gray value by its pixel count.
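The gray-level moments above can be computed directly from the image histogram. The further step that maps these moments to the two Canny thresholds (the moment-preserving derivation) is omitted in this sketch:

```python
import numpy as np

def gray_moments(gray):
    """First three gray-level moments m_i = (1/N) * sum_j n_j * g_j**i,
    where n_j is the number of pixels with gray value g_j and N is the
    total pixel count. These moments drive the moment-preserving
    selection of the adaptive Canny thresholds."""
    counts = np.bincount(gray.ravel().astype(np.int64), minlength=256)
    n = counts.sum()
    g = np.arange(256, dtype=np.float64)
    return tuple((counts * g ** i).sum() / n for i in (1, 2, 3))
```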

(2) Color Transform and Color Classification:

A new color model is used to separate vehicle colors from non-vehicle colors effectively. This color model transforms the color components of each pixel p into a two-dimensional color domain (u, v), where (Rp, Gp, Bp) are the R, G, and B components of pixel p and Zp = (Rp + Gp + Bp)/3.

As shown in Fig. 5, vehicle colors and non-vehicle colors have less overlap in the (u, v) color domain. Therefore, we first apply the color transform to obtain the (u, v) components and then use a support vector machine (SVM) to classify vehicle colors versus non-vehicle colors.

Fig. 5: Vehicle colors and non-vehicle colors in different color spaces: (a) U-V, (b) R-G, (c) G-B, and (d) B-R planes.
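A sketch of the color pipeline follows. Since the exact (u, v) transform coefficients are not reproduced in the text above, the transform below uses placeholder components with the same structure (u = R - Z, v = B - Z), and a nearest-mean rule stands in for the SVM classifier; both substitutions are assumptions of this sketch:

```python
import numpy as np

def to_uv(rgb):
    """Map RGB pixels to a 2-D chroma plane. Placeholder transform:
    u = R - Z and v = B - Z, with Z the per-pixel channel mean."""
    rgb = rgb.astype(np.float64)
    z = rgb.mean(axis=-1)
    return np.stack([rgb[..., 0] - z, rgb[..., 2] - z], axis=-1)

def nearest_mean_classifier(train_uv, train_labels):
    """Stand-in for the SVM color classifier: classify a (u, v) sample
    by the nearer class mean in the chroma plane."""
    means = {c: train_uv[train_labels == c].mean(axis=0)
             for c in np.unique(train_labels)}
    def predict(uv):
        return min(means, key=lambda c: np.linalg.norm(uv - means[c]))
    return predict
```

In the real framework, an SVM trained on labeled vehicle/non-vehicle (u, v) samples replaces the nearest-mean rule.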

C) Dynamic Bayesian Network

A dynamic Bayesian network (DBN) is constructed for classification. It is based on Bayes' theorem, and the DBN is the most accurate among the compared classifiers. The inference involves two kinds of probabilities: the prior and the posterior. We first convert regional local features into quantitative observations that can be referenced when applying pixelwise classification via the DBN. In the training phase, we extract multiple features, including local edge and corner features as well as vehicle colors, to train the DBN. In the detection phase, we first perform background color removal as described above; afterward, the same feature extraction procedure is performed as in the training phase. The extracted features serve as evidence to infer the unknown state of the trained DBN, which indicates whether a pixel belongs to a vehicle or not. We thus perform pixelwise classification for vehicle detection using DBNs. The design of the DBN model is illustrated in Fig. 6: the state Vt depends on the state Vt-1, and at each time slice, the state influences the observation nodes, which are assumed to be independent of one another.

Fig. 6: DBN model for pixelwise classification. The hidden state Vt depends on Vt-1 and emits the observation nodes St, Zt, At, Ct, and Et.

The first feature, S, denotes the percentage of pixels in an N x N neighborhood that are classified as vehicle colors by the SVM, where N_VehicleColor is the number of such pixels:

S = N_VehicleColor / (N * N)

Feature C denotes the fraction of pixels in the neighborhood that are detected as corners by the Harris corner detector:

C = N_Corner / (N * N)

Feature E denotes the fraction of pixels in the neighborhood that are detected as edges by the enhanced Canny edge detector. The pixels that are classified as vehicle colors are grouped into connected vehicle-color regions.

E = N_Edge / (N * N)

The last two features, A and Z, are defined as the aspect ratio and the size of the connected vehicle-color region, respectively. More specifically, A is the ratio of the region's length to its width, and Z is the pixel count of the particular vehicle-color region:

A = Length / Width
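Under the definitions above, the five observations for one N x N block can be computed as follows. The block size N and the bounding-box-based length/width measurement are modeling choices of this sketch:

```python
import numpy as np

def block_features(vehicle_color_mask, corner_mask, edge_mask, region):
    """Compute the five observations (S, C, E, A, Z) for one N x N block.
    Inputs are boolean masks over the block; `region` is the (non-empty)
    boolean mask of the connected vehicle-color region the block belongs to."""
    n_pixels = vehicle_color_mask.size
    s = vehicle_color_mask.sum() / n_pixels      # S: vehicle-color ratio
    c = corner_mask.sum() / n_pixels             # C: corner ratio
    e = edge_mask.sum() / n_pixels               # E: edge ratio
    rows, cols = np.nonzero(region)
    length = rows.max() - rows.min() + 1         # bounding-box extents
    width = cols.max() - cols.min() + 1
    a = length / width                           # A = Length / Width
    z = region.sum()                             # Z: region pixel count
    return s, c, e, a, z
```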

In the detection phase, the Bayesian rule is used to obtain the probability that a pixel belongs to a vehicle, i.e.,

P(Vt | St, Ct, At, Zt, Vt-1) ∝ P(Vt | St) P(Vt | Ct) P(Vt | At) P(Vt | Zt) P(Vt | Vt-1) P(Vt-1)

P(Vt | St) is defined as the probability that the pixel is a vehicle pixel at time slice t given observation St. According to the naive Bayesian rule of conditional probability, the desired joint probability can be factorized because all the observations are assumed to be independent. The proposed vehicle detection framework can also use a static Bayesian network (BN) to classify a pixel as a vehicle or non-vehicle pixel.
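The factored inference above can be sketched as a product of per-observation terms. The conditional probabilities are assumed to come from the trained network; since the product is unnormalized, both the vehicle and non-vehicle hypotheses are evaluated and normalized before thresholding:

```python
def vehicle_posterior(likelihoods, p_transition, p_prev):
    """Unnormalized score for one hypothesis about one pixel: the product
    of the per-observation terms (e.g. P(Vt|St), P(Vt|Ct), P(Vt|At),
    P(Vt|Zt)) with the temporal term P(Vt|Vt-1) and the previous
    belief P(Vt-1)."""
    score = p_transition * p_prev
    for p in likelihoods:
        score *= p
    return score

def classify_pixel(vehicle_terms, nonvehicle_terms):
    """Normalize the two hypothesis scores and return the probability
    that the pixel is a vehicle pixel."""
    v = vehicle_posterior(*vehicle_terms)
    nv = vehicle_posterior(*nonvehicle_terms)
    return v / (v + nv)
```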

D) Post Processing

We use morphological operations to enhance the detection mask and perform connected-component labeling to obtain the vehicle objects. The post-processing mainly uses two morphological operations: opening and closing. Opening removes small objects from the foreground of an image, placing them in the background, while closing removes small holes in the foreground, changing small background regions into foreground. These operations can also be used to find specific shapes in an image; for example, opening can find shapes into which a specific structuring element fits. The size and aspect-ratio constraints are applied again after the morphological operations in the post-processing stage to eliminate objects that cannot be vehicles.
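A minimal open-then-close step on a binary detection mask might look like this; a 3x3 structuring element and plain NumPy are assumptions of this sketch:

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: OR of the nine shifted copies of the mask."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for i in range(3):
        for j in range(3):
            out |= p[i:i + h, j:j + w]
    return out

def erode(mask):
    """3x3 binary erosion: AND of the nine shifted copies (the padded
    border counts as background, so border foreground erodes)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.ones_like(mask)
    for i in range(3):
        for j in range(3):
            out &= p[i:i + h, j:j + w]
    return out

def open_then_close(mask):
    """Opening (erode, then dilate) removes isolated noise pixels;
    closing (dilate, then erode) fills small holes inside detections."""
    opened = dilate(erode(mask))
    return erode(dilate(opened))
```

Libraries such as scikit-image or OpenCV provide equivalent, faster morphology and connected-component labeling routines.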

IV. EXPERIMENTAL RESULTS

Experimental results are presented here. To analyze the performance of the proposed system, various video sequences with different scenes and different filming altitudes are used. When performing background color removal, we quantize the color histogram into 16 x 16 x 16 bins. Colors corresponding to the eight highest bins are regarded as background colors and removed from the scene.

Fig. 1: Input image.

Fig. 2: Color feature extraction (U component).

Fig.: Color feature extraction (V component).

Fig. 3: Canny edge detection.

Fig. 4: Harris corner detection.

Fig. 5: Final result (using DBN).

We compare different vehicle detection methods in Fig. 6. The moving-vehicle detection with road detection method requires setting many parameters to enforce the size constraints in order to reduce false alarms. However, for our experimental data set, it is very difficult to select one set of parameters that suits all videos, and setting the parameters heuristically results in a low hit rate and high false-positive counts. The cascade classifiers need to be trained with a large number of positive and negative training samples, far more than the samples used to train our SVM classifier. Vehicle colors do not change dramatically under different camera angles and heights; however, the overall appearance of vehicle templates varies greatly under different heights and camera angles. When training the cascade classifiers, the large variance in the appearance of the positive templates decreases the hit rate and increases the number of false positives. Moreover, if the aspect ratio of the multiscale detection windows is fixed, large and rotated vehicles are often missed. The symmetric-property method is prone to false detections such as symmetrical details of buildings or road markings, and the shape descriptor used to verify candidate shapes is obtained from a fixed vehicle model and is therefore not flexible. In some of our experimental data, the vehicles are not completely symmetric due to the camera angle; therefore, that method cannot yield satisfactory results.

Fig. 6: Comparison of different vehicle detection methods.

Compared with these methods, the proposed vehicle detection framework does not depend on strict vehicle size or aspect-ratio constraints. Instead, these constraints are observations that can be learned by the BN or DBN. The training process does not require a large number of training samples. The results demonstrate flexibility and good generalization ability across a wide variety of aerial surveillance scenes under different heights and camera angles. As expected, the DBN performs better than the BN. The colored pixels are those classified as vehicle pixels by the BN or DBN, and the ellipses mark the final vehicle detection results after post-processing. The DBN outperforms the BN because it incorporates information over time; when observing detection results on consecutive frames, we also notice that detections from the DBN are more stable. The reason is that, in aerial surveillance, the aircraft carrying the camera usually follows the vehicles on the ground, so the positions of the vehicles do not change dramatically in the scene even when the vehicles are moving at high speed. Therefore, the temporal information contributed by P(Vt | Vt-1) helps stabilize the detection results of the DBN.

V. CONCLUSION

In this paper, we proposed a system framework that does not assume any prior information about camera heights, vehicle sizes, or aspect ratios. The design is a new vehicle detection framework that preserves the advantages of existing works while avoiding their drawbacks. We presented an automatic vehicle detection system for aerial surveillance that departs from the stereotypical existing frameworks by using a pixelwise classification method. The novelty lies in the fact that, despite performing pixelwise classification, relations among neighboring pixels in a region are preserved in the feature extraction process. Features including vehicle colors and local features are considered. Region-based classification, which depends heavily on computationally intensive color segmentation algorithms such as mean shift, is not used. The experimental results demonstrate the flexibility and good generalization ability of the proposed method on a challenging data set of aerial surveillance images taken at different heights and under different camera angles.


