Mobile Ad Hoc Networks And Wireless Sensor Networks


A wireless sensor network (WSN) uses sensor technology to monitor physical or environmental conditions, such as pressure, sound, vibration, temperature and motion, and to transmit the data to a sink (base station) through the network. Most modern networks are bi-directional, allowing the activity of the sensors to be controlled as well. Military applications such as battlefield reconnaissance were the main inspiration for the development of wireless sensor networks; today this type of distributed network is adopted in many remote monitoring and industrial measurement applications, such as machine condition monitoring, industrial process monitoring, structural health monitoring and indoor monitoring. Sensor nodes are typically capable of wireless communication but are considerably constrained in the amount of available resources such as energy (power), storage (memory) and computation. These constraints make the deployment and operation of a WSN significantly different from existing wireless networks and demand the development of resource-aware protocols and management techniques.

Wireless sensor networks (WSNs) have become a hot research topic in recent years, and clustering is considered an effective approach to reduce network overhead and improve scalability. A wireless sensor network is a pervasive network that senses our environment through various parameters such as heat, temperature and pressure [1]. Since sensor networks are based on the dense deployment of disposable and low-cost sensor nodes, destruction of some nodes by hostile action does not affect a military operation as much as the destruction of a traditional sensor, which makes the sensor network concept a better approach for battlefields [2]. Controlling the transmission power between two nodes minimizes interference with other nodes, giving improved throughput and greater spatial reuse than wireless networks that lack power control. This work applies an adaptive transmission power technique, based on graph theory, to improve the network lifetime in wireless sensor networks.

Once the clustering procedure completes, each node in the network is associated with a cluster head. If two neighboring clusters have a sufficiently high contact probability (≥ γ), a pair of gateway nodes is identified to bridge them. Consider Node i, which intends to send a data message to Node j. Node i looks up its cluster table to find the cluster ID of Node j, i.e., Ωij [4]. According to Ωij, three routing cases are considered: intra-cluster routing, one-hop inter-cluster routing, and multihop inter-cluster routing.

The clustering technique uses a minimum spanning tree (MST) to determine shortest paths in the wireless sensor network. Data from cluster heads near the sink node is transmitted to it directly, while more distant cluster heads transmit through the shortest multihop path, based on the distance between each cluster head and the sink node [5]. The shortest path from each cluster head to the sink is computed, and the predominant node (the node lying on the maximum number of paths) is identified. Transmission power control improves network performance in several respects: the transmission range of each link can be adapted, and traffic capacity decreases as more nodes are added because interference increases [6]. Graph-theoretic routing builds multiple paths from data sources to neighboring nodes. A novel adaptive state-based clustering approach constructs directed acyclic graphs from each node to the gateways of any given cluster head [7], using local distance information from the edges connecting neighboring nodes [8]. Two approaches are used to control transmission power and improve the network lifetime in wireless sensor networks:

• Tree based approach

• Clustering based approach

In the cluster-based approach, sensor nodes in a WSN transmit sensed data towards the base station. Nodes sense and transmit their information directly to the cluster heads, instead of routing through their immediate neighbors. When a cluster head fails because of energy depletion, an alternative cluster head must be chosen for that region. Periodically, the sensor nodes in each cluster re-elect the next cluster head based on residual energy, to avoid node failure. Unlike previous algorithms, cluster formation precedes cluster head selection. This is based on the graph-theoretic Minimum Spanning Tree (MST) concept: a spanning tree is minimal when the total length of its edges is the minimum necessary to connect all the vertices. In the proposed algorithm, the MST is used both in the initial source node formation phase and in each cluster head formation phase (Figure 1).
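
The MST-based formation phase can be sketched with a standard MST construction. The snippet below uses Kruskal's algorithm over Euclidean distances; the node coordinates, and the choice of Kruskal's algorithm rather than any specific procedure from the paper, are assumptions made purely for illustration.

```python
import math
from itertools import combinations

def mst_edges(nodes):
    """Kruskal's algorithm over the complete Euclidean graph.
    nodes: dict {node_id: (x, y)}. Returns list of (u, v, dist) MST edges."""
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path compression
            n = parent[n]
        return n

    edges = sorted(
        (math.dist(nodes[u], nodes[v]), u, v)
        for u, v in combinations(nodes, 2)
    )
    tree = []
    for d, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components
            parent[ru] = rv
            tree.append((u, v, d))
    return tree

# Hypothetical 6-node deployment (coordinates invented for the example)
nodes = {0: (0, 0), 1: (1, 2), 2: (4, 1), 3: (5, 5), 4: (2, 4), 5: (6, 2)}
for u, v, d in mst_edges(nodes):
    print(f"MST edge {u}-{v}, length {d:.2f}")
```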

Cluster Head Selection:

In each newly formed cluster, the node with the highest energy level is selected as the cluster head (CH), and the node with the next highest energy level is kept as the next CH.
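
A minimal sketch of this selection rule, assuming each cluster member reports its residual energy (the node names and energy values below are hypothetical):

```python
# Hypothetical residual energies (in joules) for the members of one cluster
energy = {"n1": 0.82, "n2": 0.95, "n3": 0.47, "n4": 0.91}

# Rank cluster members by residual energy: the richest node becomes the
# cluster head (CH); the runner-up is kept as the next CH for re-election.
ranked = sorted(energy, key=energy.get, reverse=True)
cluster_head, next_cluster_head = ranked[0], ranked[1]
print(cluster_head, next_cluster_head)   # -> n2 n4
```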

Figure 1: Network Structure (showing the cluster head, source node and neighbour nodes)

The proposed clustering approach with transmission power control is based on graph theory and is intended to enhance the lifetime of the entire sensor network. Eligible sensor nodes are chosen depending on their power levels and on the number of nodes in their transmission area. The efficiency of the proposed model is evaluated in Matlab, and the results show that with this technique sensor nodes use considerably less power and stay in the network for a longer period of time.

A DTN is fundamentally an opportunistic communication system, where communication links only exist temporarily, rendering it impossible to establish end-to-end connections for data delivery. In such networks, routing is largely based on nodal contact probabilities. The key design issue is how to efficiently maintain, update, and utilize such probabilities. Clustering is considered an effective approach to reduce network overhead and improve scalability. Various clustering algorithms have been investigated in the context of mobile ad hoc networks. However, none of them can be applied directly to DTNs, because they are designed for well-connected networks and require timely information sharing among nodes. A node in real life tends to visit some locations more frequently than others. If two nodes share the same home location, they have a high chance of meeting each other. Thus real-life mobility patterns naturally group mobile devices into clusters.

In this work, we investigate distributed clustering of fixed maximum size and cluster-based routing protocols for Delay-Tolerant Mobile Networks (DTMNs). The basic idea is to autonomously learn unknown and possibly random mobility parameters and to group mobile nodes with similar mobility patterns into the same cluster. The nodes in a cluster can then interchangeably share their resources for overhead reduction and load balancing, aiming to achieve efficient and scalable routing in the DTMN. Due to the lack of continuous communication, it becomes challenging to acquire the information necessary to form clusters and to ensure their convergence and stability. In this protocol, an exponentially weighted moving average (EWMA) scheme is employed for on-line updating of nodal contact probabilities, with its mean proven to converge to the true contact probability. Subsequently, a set of functions including Sync(), Leave(), and Join() is devised to form clusters and select gateway nodes based on nodal contact probabilities. Each cluster has a maximum size; if the number of nodes exceeds this limit, a new cluster is formed.
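
The text names an EWMA scheme for on-line updating of nodal contact probabilities; a generic EWMA update of that kind might look as follows. The smoothing factor alpha and the per-interval "met / did not meet" convention are assumptions for illustration, not details taken from the protocol description.

```python
def ewma_update(prob, met, alpha=0.1):
    """One EWMA step for a nodal contact probability.
    prob: current estimate, met: 1 if the pair met in this interval else 0,
    alpha: smoothing factor (assumed value)."""
    return (1 - alpha) * prob + alpha * met

# Example: the estimate drifts toward the observed contact frequency.
p = 0.0
observations = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]   # hypothetical meeting record
for met in observations:
    p = ewma_update(p, met)
print(round(p, 3))
```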

This reduces the memory usage of the cluster members and minimizes the time a node needs to build its routing table. Finally, the gateway nodes exchange network information and perform routing. When a gateway node executes the Leave() function, it sends a bye message to all nodes in the cluster, which keeps the data in the gateway tables consistent.

Extensive simulations are being carried out to evaluate the efficiency of cluster-based routing. The expected results are improved node lifetime, reduced network traffic, and lower overhead and end-to-end delay.

If the cluster size were allowed to grow without bound, every node would have to store details about all nodes in its cluster and maintain gateway information for all clusters, and the clusters would also have to compute routing tables for multihop inter-cluster routing. To reduce the memory requirement of the nodes, clusters are therefore assumed to have a maximum size; computation of the routing table also becomes easier when the size is bounded. So we try to find an optimum cluster size: clusters should not be too small, as this leads to extra work, and at the same time should not be too big, as this increases the memory requirements and computation time.

Due to possible errors in the estimation of contact probabilities and the unpredictable sequence of meetings among mobile nodes, many unexpectedly small clusters may be formed. To deal with this problem, we employ a merging process that allows a node to join a "better" cluster, in which the node has a higher stability, as discussed in the next section. The merging process is effective in avoiding fractional clusters.

Load balancing is an effective enhancement to the proposed routing protocol. The basic idea is to share traffic load among cluster members in order to reduce the dropping probability due to queue overflow at some nodes. Sharing traffic inside a cluster is reasonable, because nodes in the same cluster have similar mobility pattern, and thus similar ability to deliver data messages. Whenever the queue length of a node exceeds a threshold, denoted by Λ, it starts to perform load balancing. More specifically, it randomly transmits as many messages as possible to any node it meets, until their queues are equally long or the latter’s queue becomes longer than Λ.
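
A rough sketch of this queue-sharing rule, with the threshold Λ, the queue contents and the trigger convention all chosen purely for illustration:

```python
LAMBDA = 20          # queue-length threshold (assumed value)

def balance(my_queue, peer_queue, threshold=LAMBDA):
    """Offload messages to a met cluster member until the queues are about
    equally long or the peer's queue grows past the threshold."""
    if len(my_queue) <= threshold:          # balancing not triggered
        return my_queue, peer_queue
    while len(my_queue) > len(peer_queue) and len(peer_queue) <= threshold:
        peer_queue.append(my_queue.pop())
    return my_queue, peer_queue

mine = list(range(30))       # 30 queued messages (hypothetical)
peer = list(range(5))        # the node it meets holds 5 messages
mine, peer = balance(mine, peer)
print(len(mine), len(peer))  # queues end up close to equal in length
```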

In wireless networks, data packets find their paths through routers or, in general, gateways. Each time a packet is passed to the next router a "hop" occurs. The function of intermediate hops is to relay data from one hop to the next. Therefore, in a wireless network, single-hop means that there is only one hop between the source station and the destination host. Wireless stations are connected to wireless access points (WAPs), which connect to a router via a wired network. In other words, a host connects to a base station (i.e., a wireless access point such as WiFi, WiMAX or cellular), which connects to a larger network (e.g., the Internet).

In communication networks, throughput or network throughput is the average rate of successful message delivery over a communication channel. This data may be delivered over a physical or logical link, or pass through a certain network node. The throughput is usually measured in bits per second (bit/s or bps), and sometimes in data packets per second or data packets per time slot. The throughput can be analyzed mathematically by means of queuing theory, where the load in packets per time unit is denoted arrival rate λ, and the throughput in packets per time unit is denoted departure rate μ.
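
As a small illustration of these queueing quantities, a textbook M/M/1 model (a standard example, not something specific to this essay) relates the arrival rate λ, the service rate μ, the utilization and the mean delay; the numbers below are hypothetical:

```python
# M/M/1 queue: arrival rate lam (packets/s) and service rate mu (packets/s).
lam, mu = 800.0, 1000.0          # hypothetical figures

rho = lam / mu                   # utilization (fraction of time the link is busy)
throughput = lam                 # in a stable queue (rho < 1) departures = arrivals
avg_delay = 1.0 / (mu - lam)     # mean time a packet spends in the system (s)

print(f"utilization={rho:.2f}, throughput={throughput:.0f} pkt/s, "
      f"average delay={avg_delay * 1000:.1f} ms")
```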

We start from conventional protocols such as the direct transmission protocol, in which a sensor node sends data directly to a distant base station and thus consumes its energy rapidly. This leads to the Minimum-Transmission-Energy (MTE) routing protocol, which reduces the transmission distance by routing a data packet to the BS through multiple intermediate nodes. In Section 2, we briefly introduce two classical energy-efficient algorithms: LEACH (Low-Energy Adaptive Clustering Hierarchy) and PEGASIS (Power-Efficient Gathering in Sensor Information Systems). LEACH is a clustering-based protocol in which sensor nodes are grouped into several clusters with randomized rotation of the cluster-heads that transmit data to the BS. PEGASIS is a chain-based protocol built on the ideas of LEACH, in which nodes communicate only with their closest neighbors and take turns acting as the leader that sends data back to the BS.

LEACH (Low-Energy Adaptive Clustering Hierarchy)

LEACH is a cluster-based wireless sensor networking protocol. LEACH adapts the clustering concept to distribute energy consumption among the sensor nodes in the network. LEACH improves the energy efficiency of wireless sensor networking beyond the normal clustering architecture. As a result, we can extend the lifetime of the network, which is a very important issue in the wireless sensor networking field.

In the LEACH protocol, the wireless sensor nodes divide themselves into many local clusters. In each local cluster, one node acts as the local base station (the "cluster-head"), and every node in that local cluster sends its data to the cluster-head. The important technique that distinguishes LEACH from the normal cluster architecture (which drains the cluster-head's battery very quickly) is that LEACH uses randomized rotation to select the cluster-head, taking into account the energy remaining at each node.
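
In the original LEACH paper, this randomized rotation is driven by a threshold T(n) = P / (1 − P·(r mod 1/P)), where P is the desired fraction of cluster-heads and r the current round; a node that has not served as cluster-head in the current epoch elects itself if a random draw falls below T(n). The brief sketch below shows only this baseline rule; the energy-aware weighting mentioned above is a refinement not shown here, and the value of P is just a typical choice.

```python
import random

P = 0.05   # desired fraction of cluster-heads per round (typical LEACH value)

def is_cluster_head(r, was_ch_this_epoch):
    """LEACH threshold test for round r. Nodes that already served as CH in
    the current epoch of 1/P rounds are excluded (threshold 0)."""
    if was_ch_this_epoch:
        return False
    t = P / (1 - P * (r % int(1 / P)))
    return random.random() < t

# Example: which of 100 nodes self-elect as cluster-heads in round 3
elected = [n for n in range(100) if is_cluster_head(3, was_ch_this_epoch=False)]
print(len(elected), "nodes elected themselves cluster-head")
```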

After the cluster-heads have been selected with some probability, they broadcast their status to the sensor nodes within their local range using a CSMA MAC protocol. Each sensor node then joins the cluster of the cluster-head closest to itself, because each sensor node tries to spend the minimum communication energy reaching its cluster head.

After the clustering phase is set up, each cluster-head makes a schedule for the nodes in its cluster. In the LEACH paper, TDMA is used. For greater efficiency, each sensor node can turn its radio off while waiting for its allocated transmission slot.

Each cluster-head collects the data from the nodes in its cluster and compresses that data before transmitting it to the base station. By following this protocol, the base station receives the data from all the sensor nodes of interest, ready for the end-user to access.

PEGASIS is an improved version of LEACH. Although LEACH balances the energy cost through clustering, a sensor still needs relatively large energy to transmit data to its cluster head. The main idea of PEGASIS is that nodes are formed into a chain in which each node receives from and transmits to its closest neighbor only. The distance between sender and receiver is reduced, and so is the amount of transmission energy. To construct the chain, PEGASIS uses a greedy algorithm that starts from the node farthest from the base station.

Figure 3. Chain is constructed using the greedy algorithm

In Figure 3, the algorithm starts with node 0, which connects to node 3. Then node 3 connects to node 1, and node 1 connects to node 2, which is the closest node to the base station. Because nodes already in the chain cannot be revisited, the neighbor distance increases gradually. When a node dies due to battery depletion, the chain is reconstructed by repeating the same procedure and bypassing the dead node.
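
A small sketch of that greedy construction, assuming node positions are known: start from the node farthest from the base station and repeatedly append the closest unvisited node. The coordinates are invented for illustration and do not reproduce Figure 3.

```python
import math

def greedy_chain(positions, bs):
    """Build a PEGASIS-style chain: start from the node farthest from the
    base station, then repeatedly append the closest not-yet-visited node."""
    start = max(positions, key=lambda n: math.dist(positions[n], bs))
    chain, remaining = [start], set(positions) - {start}
    while remaining:
        last = positions[chain[-1]]
        nxt = min(remaining, key=lambda n: math.dist(positions[n], last))
        chain.append(nxt)
        remaining.remove(nxt)
    return chain

positions = {0: (0, 9), 1: (3, 4), 2: (4, 1), 3: (1, 7)}   # hypothetical layout
print(greedy_chain(positions, bs=(5, 0)))   # -> [0, 3, 1, 2]
```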

In each round of transmission, a randomly chosen node is appointed as the leader that transmits data to the BS. If the BS is located outside the range of this node, multi-hop transmission is employed. The leader is changed randomly in every round, so that the overall energy dissipation is balanced out.

To transmit a packet in each round, a token is used that passes from one end of the chain to the other. Only the node that holds the token can transmit a data packet to its neighbor in the chain. When an intermediate node receives data from one neighbor along with the token, it fuses the data packet with its own data and transmits the new data packet to the next node in the chain.

Figure 4. Token passing approach in PEGASIS

In Figure 4, C0 passes its data and the token to C1. C1 fuses the data packet with its own data and passes a new data packet to the leader, C2. C2 does not transmit a data packet to the BS yet; instead it passes the token to C4. Once C2 has received the data from C4 and C3, it fuses everything and transmits the sensed data to the BS.
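
One such transmission round can be sketched as fusing each half of the chain toward the leader. The fuse operator here is a simple maximum and the readings are invented; a real deployment would use whatever aggregation the application needs.

```python
def fuse(a, b):
    """Placeholder data-fusion operator; a real deployment would combine
    readings (average, max, compressed digest, ...)."""
    return max(a, b)

def pegasis_round(chain, leader, readings):
    """One PEGASIS round: data flows from both chain ends toward the leader,
    fused hop by hop; the leader then sends the result to the BS."""
    i = chain.index(leader)
    # Left half: fuse hop by hop from chain[0] toward the leader.
    left = None
    for node in chain[:i]:
        left = readings[node] if left is None else fuse(left, readings[node])
    # Right half: same, from the far end toward the leader.
    right = None
    for node in reversed(chain[i + 1:]):
        right = readings[node] if right is None else fuse(right, readings[node])
    # Leader fuses both halves with its own reading.
    result = readings[leader]
    for part in (left, right):
        if part is not None:
            result = fuse(result, part)
    return result

chain = ["C0", "C1", "C2", "C3", "C4"]            # node order as in Figure 4
readings = dict(zip(chain, [21.0, 22.5, 20.8, 23.1, 19.6]))
print(pegasis_round(chain, leader="C2", readings=readings))  # -> 23.1
```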

From the simulation results, it makes sense that the direct transmission scheme has the worst performance, because every sensor consumes more energy transmitting data directly to the base station, resulting in a shorter network lifetime. LEACH exploits clustering: only a few cluster-heads take responsibility for sending data and every sensor takes turns being the cluster-head, distributing the energy consumption among the sensors so that a higher network lifetime can be achieved. However, PEGASIS outperforms LEACH in three ways. First, the distance between neighbors in a chain is much shorter than the distance between a node in a cluster and its head, so each sensor spends less energy. Furthermore, only one node transmits a data packet to the BS per transmission round, instead of several cluster heads as in LEACH. Finally, the leader in PEGASIS receives data from only two neighbors, rather than from all cluster nodes as in LEACH. As we can see in Figure 5, PEGASIS is approximately three times better than LEACH.

Similarly, their corresponding performance on coverage ratio shows reasonable results. If nodes drain their batteries very quickly, coverage obviously cannot be provided efficiently. It is important to mention, however, that a network with a longer lifetime (more rounds) does not guarantee better coverage. Although the coverage ratio is related to the node death percentage, a good energy-balanced scheme, which distributes energy consumption evenly among sensors and may cause all sensors to die at about the same time, provides good coverage for a long period before the nodes are almost all dead simultaneously. That is why we include the "coverage ratio" metric in the simulation in addition to the number of rounds.

Categories of Sensor Nodes:

(i) Passive, Omni Directional Sensors: passive sensor nodes sense the environment without manipulating it by active probing. In this case, the energy is needed only to amplify their analog signals. There is no notion of "direction" in measuring the environment.

(ii) Passive, narrow-beam sensors: these sensors are passive and they are concerned about the direction when sensing the environment.

(iii) Active Sensors: these sensors actively probe the environment.

Since a sensor node has limited sensing and computation capacities, communication performance and power, a large number of sensor devices are distributed over an area of interest for collecting information (temperature, humidity, motion detection, etc.). These nodes can communicate with each other for sending or getting information either directly or through other intermediate nodes and thus form a network, so each node in a sensor network acts as a router inside the network. In direct communication routing protocols (single hop), each sensor node communicates directly with a control center called Base Station (BS) and sends gathered information. The base station is fixed and located far away from the sensors. Base station(s) can communicate with the end user either directly or through some existing wired network. The topology of the sensor network changes very frequently. Nodes may not have global identification. Since the distance between the sensor nodes and base station in case of direct communication is large, they consume energy quickly.

In another approach (multi hop), data is routed via intermediate nodes to the base station and thus saves sending node energy. A routing protocol is a protocol that specifies how routers (sensor nodes) communicate with each other, disseminating information that enables them to select routes between any two nodes on the network, the choice of the route being done by routing algorithms. Each router has a priori knowledge only of the networks attached to it directly. A routing protocol shares this information first among immediate neighbors, and then throughout the network. This way, routers gain knowledge of the topology of the network. There are mainly two types of routing process: one is static routing and the other is dynamic routing.

Dynamic routing performs the same function as static routing except it is more robust. Static routing allows routing tables in specific routers to be set up in a static manner so network routes for packets are set. If a router on the route goes down, the destination may become unreachable. Dynamic routing allows routing tables in routers to change as the possible routes change. In case of wireless sensor networks dynamic routing is employed because nodes may frequently change their position and die at any moment.

Figure 1.2: A Wireless Sensor Network structure.

There are four Basic Components in a Sensor Networks:

An Assembly of distributed or localized sensors

An Interconnecting network ( usually, but not always, wireless based)

A Central point of Information Clustering

A set of Computing resources at the central point (or beyond) to handle data correlation, event trending, status querying and data mining.

Transmitted Power

Wireless sensor networks (WSNs) provide a new class of computer systems and expand the human ability to remotely interact with the physical world. Most of the sensors used so far are point sensors, which have disc-shaped sensing and communication areas. Energy-efficient communication is a central concern in WSNs: saving energy is very important because of the limited power supply of the sensors and the inconvenience of recharging their batteries. Methods have been proposed to reduce communication energy by minimizing the total sensor transmission power. That is, instead of transmitting at the maximum possible power, sensors can collaboratively determine and adjust their transmission power to reach the minimum total transmission power, and define the topology of the WSN by the neighbor relation under certain criteria. This is in contrast to the "traditional" network, in which each node transmits using its maximum transmission power and the topology is built implicitly without considering the power issue. Choosing the right transmission power critically affects the system performance in several ways. First, it affects network spatial reuse and hence the traffic-carrying capacity: choosing too large a power level results in excessive interference, while choosing too small a power level results in a disconnected network. Second, it impacts contention for the medium; collisions can be mitigated as much as possible by choosing the smallest transmission power subject to maintaining network connectivity. The goal is to find distributed methods that let each sensor decide its transmission power, by communicating with other sensors, so as to minimize the total sensor transmission power while maintaining the connectivity of the network. A simpler scheme can maintain network connectivity but may not minimize the total sensor transmission power. It is therefore enhanced into the DTCYC algorithm, whose basic idea is to let each sensor remove the largest edge in every cycle involving it as a vertex. Mathematical proofs show that this not only maintains network connectivity but also minimizes the total transmission power.
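
A hedged illustration of the general idea (not of the DTCYC algorithm itself): once a reduced topology has been fixed, for example the spanning tree kept after removing the heaviest edge of each cycle, every sensor only needs enough power to reach its farthest remaining neighbour, instead of always transmitting at maximum power. The path-loss exponent and edge distances below are assumptions for the sake of the example.

```python
ALPHA = 2.0          # assumed path-loss exponent (free-space value)

def per_node_power(edges):
    """edges: list of (u, v, distance) kept after topology control.
    Each node's transmit power is set by its farthest kept neighbour
    (power ~ distance ** ALPHA, with constants dropped)."""
    reach = {}
    for u, v, d in edges:
        reach[u] = max(reach.get(u, 0.0), d)
        reach[v] = max(reach.get(v, 0.0), d)
    return {n: d ** ALPHA for n, d in reach.items()}

kept_edges = [(0, 1, 2.2), (1, 4, 2.2), (4, 3, 3.2), (1, 2, 3.6), (2, 5, 2.2)]
powers = per_node_power(kept_edges)
print(sum(powers.values()))                       # total power, adjusted transmission
print(len(powers) * max(d for _, _, d in kept_edges) ** ALPHA)  # everyone-at-max baseline
```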

Figure 1.3: Transmitted power in WSN

The source node is indicated here as 's'. It has to reach its destination node 'T' via the shortest path while saving power. Figure 1.3 shows a sample WSN with arbitrarily chosen transmit power levels.

Power Efficiency in WSNs is generally accomplished in three ways:

Low duty cycle operation

Local/in-network processing to reduce data volume (and hence transmission time)

Multihop networking, which reduces the need for long-range transmission, since signal path loss grows as a power of the distance. Each node in the sensor network can act as a repeater, thereby reducing the link range required and in turn the transmission power, as the example below illustrates.
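
A back-of-the-envelope illustration of that saving: with path loss proportional to d^α, splitting one long hop into k equal-length relays divides the total radiated energy by roughly k^(α-1). The distance and exponent below are assumed values, and receive/processing overheads at the relays are ignored.

```python
def radiated_energy(distance, hops, alpha=3.0):
    """Total transmit energy (arbitrary units) for covering `distance`
    in `hops` equal-length hops, with path-loss exponent `alpha`."""
    return hops * (distance / hops) ** alpha

d = 100.0                                     # metres, hypothetical
for hops in (1, 2, 4):
    print(hops, "hop(s):", radiated_energy(d, hops))
# 1 hop: 1,000,000   2 hops: 250,000   4 hops: 62,500  -> a k**(alpha-1) saving
```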

The advantages and disadvantages of WSNs can be summarized as follows:

1.3 Advantages:

Network setups can be done without fixed infrastructure.

Ideal for the non-reachable places such as across the sea, mountains, rural areas or deep forests.

Flexible if there is ad hoc situation when additional workstation is required.

Implementation cost is cheap.

1.4 Disadvantages:

Less secure because hackers can enter the access point and get all the information.

Lower speed compared to a wired network.

More complex to configure than a wired network.

Easily affected by surroundings (walls, microwaves, large distances) due to signal attenuation.

Overview of Sensor technology

Sensor nodes are almost invariably constrained in energy supply and radio channel transmission bandwidth. These constraints, in conjunction with the typical deployment of large numbers of sensor nodes, have posed a plethora of challenges to the design and management of WSNs. Some of the key technologies and standards elements that are relevant to sensor networks are as follows:

Sensors

Intrinsic Functionality

Signal processing

Compression, forward error correction, encryption

Control/actuation

Clustering and in-network computation

Self assembly

Wireless Radio Technologies

Software defined radios

Transmission range

Transmission impairments

Modulation Techniques

Network Topologies

Standards

IEEE 802.11a/b/g together with ancillary security protocols

IEEE 802.15.1 PAN/Bluetooth

IEEE 802.15.3 Ultra wide band (UWB)

IEEE 802.15.4 ZIGBEE

IEEE 802.16 WIMAX

IEEE 1451.5 (Wireless Sensor Working Group)

Mobile IP

Software Applications

Operating Systems

Network Software

Direct database Connectivity software

Middleware software

Data Management Software

1.7 Commercial Generations of Sensor Networks

The first, second and third generations of sensor networks are shown in Table 1.1. The various parameters of each generation are clearly indicated.

Table 1.1: Generations of Sensor Networks

Parameters        | First Generation                                  | Second Generation                                 | Third Generation
Size              | Attaché case or larger                            | Paperback book or smaller                         | Small, even a dust particle
Weight            | Pounds                                            | Ounces                                            | Grams or less
Deployment mode   | Physically installed or air-dropped               | Hand-placed                                       | Embedded or sprinkled
Node architecture | Integrated sensing, processing and communication  | Integrated sensing, processing and communication  | Fully integrated sensing, processing and communication
Protocols         | Proprietary                                       | Proprietary                                       | Standards: Wi-Fi, ZigBee, WiMAX, etc.
Topology          | Point to point, star and multihop                 | Client-server and peer to peer                    | Fully peer to peer
Power supply      | Large batteries or line feed                      | AA batteries                                      | Solar or possibly nanotechnology based
Life span         | Hours, days and longer                            | Days to weeks                                     | Months to years

ISSUES OF WIRELESS SENSOR NETWORKS

2.1. Hardware and Operating System for WSN

Wireless sensor networks are composed of hundreds of thousands of tiny devices called nodes. A sensor node is often abbreviated as a node. A sensor is a device which senses information and passes it on to a mote. Sensors are used to measure changes to the physical environment, like pressure, humidity, sound and vibration, and changes to the health of a person, like blood pressure, stress and heart beat. A mote consists of a processor, memory, battery, A/D converter for connecting to a sensor, and a radio transmitter for forming an ad hoc network. A mote and a sensor together form a sensor node. There can be different sensors for different purposes mounted on a mote. Motes are also sometimes referred to as smart dust. A sensor node forms the basic unit of the sensor network.

The nodes used in sensor networks are small and have significant energy constraints. The hardware design issues of sensor nodes are quite different from those of other applications, and they are as follows:

Radio Range of nodes should be high (1-5 kilometers). Radio range is critical for ensuring network connectivity and data collection in a network as the environment being monitored may not have an installed infrastructure for communication. In many networks the nodes may not establish connection for many days or may go out of range after establishing connection.

Use of memory chips like flash memory is recommended for sensor networks, as they are non-volatile and inexpensive.

Energy/Power Consumption of the sensing device should be minimized and sensor nodes should be energy efficient since their limited energy resource determines their lifetime. To conserve power the node should shut off the radio power supply when not in use. Battery type is important since it can affect the design of sensor nodes. Battery Protection Circuit to avoid overcharge or discharge problem can be added to the sensor nodes.

Sensor networks consist of hundreds of thousands of nodes, so deployment is preferred only if the individual node is cheap. Various platforms have been developed considering the design issues discussed above, such as Mica2, MicaZ, Telos, BTnode, Imote and MIT μAMPS (μ-Adaptive Multi-domain Power-aware Sensors). Among them the Berkeley Motes, made commercially available by Crossbow Technologies, are very popular and are used by various research organizations.

2.2. Wireless Radio Communication Characteristics

Performance of wireless sensor networks depends on the quality of wireless communication. But wireless communication in sensor networks is known for its unpredictable nature. Main design issues for communication in WSNs are:

Low power consumption in sensor networks is needed to enable long operating lifetime by facilitating low duty cycle operation and local signal processing.

Distributed sensing effectively acts against various environmental obstacles and care should be taken that the signal strength, consequently the effective radio range is not reduced by various factors like reflection, scattering and dispersions.

Multihop networking may be adapted among sensor nodes to reduce the range of communication link.

Long range communication is typically point to point and requires high transmission power, with the danger of being eavesdropped. So, short range transmission should be considered to minimize the possibility of being eavesdropped.

Communication systems should include error control subsystems to detect errors and to correct them.

2.3. Medium Access Schemes

Communication is a major source of energy consumption in WSNs, and MAC protocols directly control the radios of the nodes in the network. MAC protocols should therefore be designed to regulate energy consumption, which in turn influences the lifetime of the network. The various design issues of MAC protocols suitable for the sensor network environment are:

The MAC layer provides fine-grained control of the transceiver and allows on and off switching of the radio. The design of the MAC protocol should have this switching mechanism to decide when and how frequently the on and off mechanism should be done. This helps in conserving energy.

A MAC protocol should avoid collisions from interfering nodes, over-emitting, overhearing, control packet overhead and idle listening. When a receiver node receives more than one packet at the same time, these packets are called "collided packets" and need to be sent again, thereby increasing energy consumption. When a destination node is not ready to receive messages, this is called over-emitting. Overhearing occurs if a node picks up packets that were destined for some other node. Sending and receiving packets of little use results in control overhead. Idle listening is an important factor, as nodes often listen to the channel for possible reception of data that is never sent.

Scalability, adaptability and decentralization are other important criteria in designing a MAC protocol. The sensor network should adapt to changes in network size, node density and topology. Also, some nodes may die over time, some may join and some may move to different locations. A good MAC protocol should accommodate these changes to the network.

A MAC protocol should have minimum latency and high throughput when the sensor networks are deployed in critical applications.

A MAC protocol should include message passing. Message passing means dividing a long message into small fragments and transmitting them in a burst. Thus, a node which has more data gets more time to access the medium.

There should be uniformity in reporting the events by a MAC protocol. Since the nodes are deployed randomly, nodes from highly dense area may face high contention among themselves when reporting events resulting in high packet loss. Consequently the sink detects fewer events from such areas. Also the nodes which are nearer to the sink transmit more packets at the cost of nodes which are away from the sink.

MAC protocols should take care of the well-known problem of information asymmetry, which arises when a node is not aware of packet transmissions two hops away.

MAC protocols should satisfy real-time requirements. Since the MAC layer is the base of the communication stack, timely detection, processing and delivery of information from the deployed environment is an indispensable requirement in a WSN application. Some popular MAC protocols are S-MAC (Sensor MAC), B-MAC, Z-MAC, T-MAC and WiseMAC.

2.4. Deployment

Deployment means setting up an operational sensor network in a real-world environment. Deployment of a sensor network is a labor-intensive and cumbersome activity, as the deployer has little influence over the quality of wireless communication and the real world puts strain on the sensor nodes by interfering with communication. Sensor nodes can be deployed either by placing them one after another in a sensor field or by dropping them from a plane. Various deployment issues which need to be taken care of are:

When sensor nodes are deployed in the real world, node death due to energy depletion, caused either by normal battery discharge or by short circuits, is a common problem which may lead to wrong sensor readings. Also, sink nodes act as gateways that store and forward the collected data. Hence, problems affecting sink nodes should be detected to minimize data loss.

Deployment of sensor networks results in network congestion due to many concurrent transmission attempts made by several sensor nodes. Concurrent transmission attempts occur due to inappropriate design of the MAC layer or by repeated network floods. Another issue is the physical length of a link. Two nodes may be very close to each other but still they may not be able to communicate due to physical interference in the real world while nodes which are far away may communicate with each other.

Low data yield is another common problem in real world deployment of sensor nodes. Low data yield means a network delivers insufficient amount of information.

Self Configuration of sensor networks without human intervention is needed due to random deployment of sensor nodes.

2.5 Localization

Sensor localization is a fundamental and crucial issue for network management and operation. In many real-world scenarios, the sensors are deployed without knowing their positions in advance, and there is no supporting infrastructure available to locate and manage them once they are deployed. Determining the physical location of the sensors after they have been deployed is known as the problem of localization. A location discovery or localization algorithm for a sensor network should satisfy the following requirements:

The localization algorithm should be distributed, since a centralized approach requires heavy computation at selected nodes to estimate the positions of all nodes in the environment. This increases signaling bandwidth and also puts an extra load on the nodes close to the central node.

Knowledge of the node location can be used to implement energy efficient message routing protocols in sensor networks.

Localization algorithms should be robust to node failures and node loss, and should be tolerant of errors in physical measurements.

The precision of localization increases with the number of beacons. A beacon is a node which is aware of its own location. The main problem with increasing the number of beacons is that they are more expensive than other sensor nodes, and once the unknown stationary nodes have been localized using the beacon nodes, the beacons become useless.

Techniques that depend on measuring the ranging information from signal strength and time of arrival require specialized hardware that is typically not available on sensor nodes.

Localization algorithm should be accurate, scalable and support mobility of nodes.

2.6 Synchronization

Clock synchronization is an important service in sensor networks. Time Synchronization in a sensor network aims to provide a common timescale for local clocks of nodes in the network. A global clock in a sensor system will help process and analyze the data correctly and predict future system behavior. Some applications that require global clock synchronization are environment monitoring, navigation guidance, vehicle tracking etc. A clock synchronization service for a sensor network has to meet challenges that are substantially different from those in infrastructure based networks.

Some synchronization schemes consume more energy because they rely on energy-hungry equipment such as GPS (Global Positioning System) receivers or on protocols such as NTP (Network Time Protocol).

The lifetime or the duration for the nodes which are spread over a large geographical area needs to be taken into account. Sensor nodes have higher degree of failures. Thus the synchronization protocol needs to be more robust to failures and to communication delay.

Sensor nodes need to coordinate and collaborate to achieve a complex sensing task like data fusion. In data fusion the data collected from different nodes are aggregated into a meaningful result. If the sensor nodes lack synchronization among themselves then the data estimation will be inaccurate.

Traditional synchronization protocols try to achieve the highest degree of accuracy. The higher the accuracy, the more resources are required. Therefore, we need a trade-off between synchronization accuracy and resource requirements based on the application.

Sensor networks span multi hops with higher jitter. So, the algorithm for sensor network clock synchronization needs to achieve multihop synchronization even in the presence of high jitter. Various synchronization protocols which can be found in the literature are Reference Broadcast Synchronization (RBS) and Delay Measurement Time Synchronization protocol.

2.7 Calibration

Calibration is the process of adjusting the raw sensor readings obtained from the sensors into corrected values by comparing it with some standard values. Manual calibration of sensors in a sensor network is a time consuming and difficult task due to failure of sensor nodes and random noise which makes manual calibration of sensors too expensive.

Various Calibration issues in sensor networks are:

A sensor network consists of large number of sensors typically with no calibration interface.

Access to individual sensors in the field can be limited.

Reference values might not be readily available.

Different applications require different calibration.

Requires calibration in a complex dynamic environment with many observables like aging, decaying, damage etc.

Other objectives of calibration include accuracy, resiliency against random errors, ability to be applied in various scenarios and to address a variety of error models.

2.8 Network Layer Issues

Various issues at the network layer are:

Energy efficiency is a very important criterion. Different techniques need to be discovered to eliminate energy inefficiencies that may shorten the lifetime of the network. At the network layer, various methods need to be found out for discovering energy efficient routes and for relaying the data from the sensor nodes to the BS so that the lifetime of a network can be optimized.

Routing Protocols should incorporate multi-path design technique. Multi-path is referred to those protocols which set up multiple paths so that a path among them can be used when the primary path fails.

Path repair is desired in routing protocols whenever a path break is detected. Fault tolerance is another desirable property for routing protocols: routing protocols should be able to find a new path at the network layer even if some nodes fail or are blocked due to environmental interference.

Sensor networks collect information from the physical environment and are highly data centric. In the network layer in order to maximize energy savings a flexible platform need to be provided for performing routing and data management.

The data traffic that is generated will have significant redundancy among individual sensor nodes since multiple sensors may generate same data within the vicinity of a phenomenon. The routing protocol should exploit such redundancy to improve energy and bandwidth utilization.

As the nodes are scattered randomly resulting in an ad hoc routing infrastructure, a routing protocol should have the property of multiple wireless hops.

Routing Protocols should take care of heterogeneous nature of the nodes i.e. each node will be different in terms of computation, communication and power.

Various types of routing protocols for WSNs are Sensor Protocols for Information via Negotiation (SPIN), Rumor Routing, Directed Diffusion, Low-Energy Adaptive Clustering Hierarchy (LEACH), Threshold-sensitive Energy Efficient sensor Network protocol (TEEN), Geographic and Energy Aware Routing (GEAR), Sequential Assignment Routing (SAR) and others.

2.9. Transport Layers Issues

End-to-end reliable communication is provided at the transport layer. The various design issues for transport layer protocols are:

In transport layer the messages are fragmented into several segments at the transmitter and reassembled at the receiver. Therefore a transport protocol should ensure orderly transmission of the fragmented segments.

Limited bandwidth results in congestion which impacts normal data exchange and may also lead to packet loss.

Bit error rate also results in packet loss and also wastes energy. A transport protocol should be reliable for delivering data to potentially large group of sensors under extreme conditions.

End to End communication may suffer due to various reasons: The placement of nodes is not predetermined and external obstacles may cause poor communication performance between two nodes. If this type of problem is encountered then end to end communication will suffer. Another problem is failure of nodes due to battery depletion.

In sensor networks the loss of data, when it flows from source to sink is generally tolerable. But the data that flows from sink to source is sensitive to message loss. A sensor obtains information from the surrounding environment and passes it on to the sink which in turn queries the sensor node for information.

Traditional transport protocols such as UDP and TCP cannot be directly implemented in sensor networks for the following reasons:

If a sensor node is far away from the sink then the flow and congestion control mechanism cannot be applied for those nodes.

Successful end-to-end transmission of packets is guaranteed in TCP, but this is not necessary in the event-driven applications of sensor networks.

Overhead in a TCP connection does not work well for an event driven application of sensor networks.

UDP, on the other hand, has a reputation of not providing reliable data delivery and has no congestion or flow control mechanisms, which are needed for sensor networks. Pump Slowly, Fetch Quickly (PSFQ) is one of the popular transport layer protocols proposed for WSNs.

2.10 Data Aggregation and Data Dissemination

Data gathering is the main objective of sensor nodes. The frequency of reporting the data and the number of sensors which report the data depends on the particular application. Data gathering involves systematically collecting the sensed data from multiple sensors and transmitting the data to the base station for further processing. But the data generated from sensors is often redundant and also the amount of data generated may be very huge for the base station to process it. Hence a method is needed for combining the sensed data into high quality information and this is accomplished through Data Aggregation. Data Aggregation is defined as the process of aggregating the data from multiple sensors to eliminate redundant transmission and estimating the desired answer about the sensed environment, then providing fused information to the base station.

Some design issues in data aggregation are:

Sensor networks are inherently unreliable, and certain information may be unavailable or expensive to obtain, such as the number of nodes present in the network and the number of nodes that are responding; it is also difficult to obtain complete and up-to-date information about the neighboring sensor nodes.

Making only some of the nodes transmit data directly to the base station, or reducing the amount of data transmitted to the base station, in order to save energy.

Eliminating transmission of redundant data using meta-data negotiation, as in the SPIN protocol.

Improving clustering techniques for data aggregation to conserve energy of the sensors.

Improving In-Network aggregation techniques to improve energy efficiency. In-Network aggregation means sending partially aggregated values rather than raw values, thereby reducing power consumption.
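
A small sketch of such in-network aggregation: instead of forwarding every raw reading, each node forwards a compact partial aggregate (count, sum, min, max) that its parent can merge, so the traffic per hop stays roughly constant. The readings and the choice of aggregate are illustrative only.

```python
def summarize(readings):
    """Turn raw readings into a constant-size partial aggregate."""
    return {"count": len(readings), "sum": sum(readings),
            "min": min(readings), "max": max(readings)}

def merge(a, b):
    """Combine two partial aggregates received from child nodes."""
    return {"count": a["count"] + b["count"], "sum": a["sum"] + b["sum"],
            "min": min(a["min"], b["min"]), "max": max(a["max"], b["max"])}

# Two leaf nodes aggregate locally; the parent merges and forwards one packet.
child_a = summarize([21.2, 20.9, 21.5])
child_b = summarize([22.0, 21.8])
parent = merge(child_a, child_b)
parent_mean = parent["sum"] / parent["count"]
print(parent, round(parent_mean, 2))
```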

Data dissemination is the process by which data and the queries for the data are routed in the sensor network. Data dissemination is a two-step process. In the first step, if a node is interested in some events, like temperature or humidity, it broadcasts its interests to its neighbors periodically and then through the whole sensor network. In the second step, the nodes that have the requested data send it back to the source node after receiving the request. The main difference between data aggregation and data dissemination is that in data dissemination all nodes, including the base station, can request data, while in data aggregation the aggregated data is periodically transmitted to the base station. In addition, in data aggregation data can be transmitted periodically, while in data dissemination data is always transmitted on demand. Flooding is one important protocol that follows the data dissemination approach.

2.11 Database Centric and Querying

Wireless sensor networks have the potential to span and monitor a large geographical area producing massive amount of data. So sensor networks should be able to accept the queries for data and respond with the results.

The data flow in a sensor database is very different from the data flow of the traditional database due to the following design issues and requirements of a sensor network:

The nodes are volatile since the nodes may get depleted and links between various nodes may go down at any point of time but data collection should be interrupted as little as possible.

Sensor data is exposed to more errors than in a traditional database due to interference of signals and device noise.

Sensor networks produce data continuously in real time and on a large scale from the sensed phenomenon resulting in need of updating the data frequently; whereas a traditional database is mostly of static and centralized in nature.

Limited storage and scarcity of energy are other important constraints that need to be taken care of in a sensor network database, whereas a traditional database usually has plenty of resources and disk space is not an issue.

The low level communication primitives in the sensor networks are designed in terms of named data rather than the node identifiers which are used in the traditional networks.

2.12 Architecture

Architecture can be considered as a set of rules and regulation for implementing some functionality along with a set of interfaces, functional components, protocols and physical hardware. Software architecture is needed to bridge the gap between raw hardware capabilities and a complete system.

The key issues that must be addressed by the sensor architecture are:

Several operations like continuous monitoring of the channel, encoding of data and transferring of bits to the radio need to be performed in parallel. Also sensor events and data calculations must continue to proceed while communication is in progress.

A durable and scalable architecture would allow dynamic changes to be made for the topology with minimum update messages being transmitted.

The system must be flexible to meet the wide range of target application scenarios, since wireless sensor networks do not have a fixed set of communication protocols that they must adhere to.

The architecture must provide precise control over radio transmission timing. This requirement is driven by the need for ultra-low power communication for data collection application scenarios.

The architecture must decouple the data path speed and the radio transmission rate because direct coupling between processing speed and communication bit rates can lead to sub-optimal energy performance.

2.13 Programming Models of Sensor Networks

Currently, programmers are too concerned with low-level details like sensing and node-to-node communication, raising a need for programming abstractions. There is considerable research activity in designing programming models for sensor networks due to the following issues:

Since the data collected from the surrounding phenomenon is not intended for general-purpose computing, a reactive, event-driven programming model is needed.

Resources in a sensor network are very scarce, where even a typical embedded OS consuming hundreds of KB is considered too much. So programming models should help programmers in writing energy efficient applications.

Necessity to reduce the run time errors and complexity since the applications in a sensor network need to run for a long duration without human intervention.

Programming models should help programmers to write bandwidth efficient programs and should be accompanied by runtime mechanisms that achieve bandwidth efficiency whenever possible.

2.14 Middleware

A middleware for WSNs should facilitate development, maintenance, deployment and execution of sensing-based applications. WSN middleware can be considered as a software infrastructure that glues together the network hardware, operating systems, network stacks and applications. Various issues in designing a middleware for wireless sensor networks are:

Middleware should provide an interface to the various types of hardware and networks supported by primitive operating system abstractions. Middleware should provide a new programming paradigm offering application-specific APIs rather than dealing with low-level specifications.

Efficient middleware solutions should hide the complexity involved in configuring individual nodes based on their capabilities and hardware architecture.

Middleware should include mechanisms to provide real time services by dynamically adapting to the changes in the environment and providing consistent data. Middleware should be adaptable to the devices being programmed depending on the hardware capabilities and application needs.

There should be transparency in the middleware design. Middleware is designed for providing a general framework whereas sensor networks are themselves designed to be application specific. Therefore some tradeoff is needed between generality and specificity.

Sensor network middleware should support mobility, scalability and dynamic network organization. Middleware design should incorporate real time priorities. Priority of a message should be assigned at runtime by the middleware and should be based on the context.

Middleware should support quality of service considering many constraints which are unique to sensor networks like energy, data, mobility and aggregation.

Security has become of paramount importance with sensor networks being deployed in mission critical areas like military, aviation and in medical field.

2.15 Quality of Service

Quality of service is the level of service provided by the sensor network to its users. Quality of Service (QoS) for sensor networks can be defined as the optimum number of sensors sending information towards information-collecting sinks or a base station.

The QoS routing algorithms for wired networks cannot be directly applied to wireless sensor networks due to the following reasons:

The performance of most wired routing algorithms relies on the availability of precise state information, while the dynamic nature of sensor networks makes the availability of precise state information next to impossible.

Nodes in the sensor network may join, leave and rejoin and links may be broken at any time. Hence maintaining and re-establishing the paths dynamically which is a problem in WSN is not a big issue in wired networks.

Various Quality of Service issues in sensor networks are:

The QoS in WSN is difficult because the network topology may change constantly and the available state information for routing is inherently imprecise.

Sensor networks need to be supplied with the required amount of bandwidth so that it is able to achieve a minimal required QoS.

Traffic is unbalanced in sensor network since the data is aggregated from many nodes to a sink node. QoS mechanisms should be designed for an unbalanced QoS constrained traffic.

Many times, routing in sensor networks needs to sacrifice energy efficiency to meet delivery requirements. Even though multihop routing reduces the amount of energy consumed for data collection, the overhead associated with it may slow down packet delivery. Also, redundant data makes routing for data aggregation a complex task, thus affecting quality of service in the WSN.

Buffering in routing is advantageous, as it helps to receive many packets before forwarding them. But multihop routing requires buffering a huge amount of data. The limited buffer size increases the delay variation that packets incur while traveling on different routes, or even on the same route, making it difficult to meet QoS requirements.

QoS designed for WSN should be able to support scalability. Adding or removing of the nodes should not affect the QoS of the WSN.

2.16 Security

Security in sensor networks is as much an important factor as performance and low energy consumption in many applications. Security in a sensor network is very challenging as WSN is not only being deployed in battlefield applications but also for surveillance, building monitoring, burglar alarms and in critical systems such as airports and hospitals. Since sensor networks are still a developing technology, researchers and developers agree that their efforts should be concentrated in developing and integrating security from the initial phases of sensor applications development; by doing so, they hope to provide a stronger and complete protection against illegal activities and maintain stability of the systems at the same time.

The following are the basic security requirements to which every WSN application should adhere:

Confidentiality is needed to ensure sensitive information is well protected and not revealed to unauthorized third parties. Confidentiality is required in sensor networks to protect information traveling between the sensor nodes of the network or between the sensors and the base station; otherwise it may result in eavesdropping on the communication.

Authentication techniques verify the identity of the participants in a communication. In sensor networks it is essential for each sensor node and the base station to be able to verify that the data received was really sent by a trusted sender and not by an adversary that tricked legitimate nodes into accepting false data. False data can change the expected behavior of the network.

Lack of integrity may result in inaccurate information. Many sensor applications such as pollution and healthcare monitoring rely on the integrity of the information to function; for e.g., it is unacceptable to have improper information regarding the magnitude of the pollution that has occurred.

One of the many attacks launched against sensor networks is the message replay attack, where an adversary captures messages exchanged between nodes and replays them later to cause confusion in the network. Sensor networks should therefore be designed for freshness, meaning that old packets cannot be reused, thus preventing potential mix-ups.
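
The following sketch illustrates one simple way to enforce freshness, assuming each sender numbers its packets with a monotonically increasing counter that the receiver tracks; the packet fields here are hypothetical, and in practice the counter would also be covered by the message authentication code so that it cannot be forged.

```python
# Illustrative sketch of replay (freshness) protection: each sender numbers its
# packets with a monotonically increasing counter, and the receiver discards any
# packet whose counter is not strictly newer than the last one accepted.
# The packet structure is a hypothetical example.
from dataclasses import dataclass

@dataclass
class Packet:
    sender_id: int
    counter: int      # freshness counter set by the sender
    payload: bytes

class FreshnessFilter:
    def __init__(self) -> None:
        self.last_seen: dict[int, int] = {}   # sender_id -> highest counter accepted

    def accept(self, pkt: Packet) -> bool:
        newest = self.last_seen.get(pkt.sender_id, -1)
        if pkt.counter <= newest:
            return False                       # replayed or duplicated packet
        self.last_seen[pkt.sender_id] = pkt.counter
        return True

# Usage: a captured packet replayed later by an adversary is rejected.
rx = FreshnessFilter()
p = Packet(sender_id=5, counter=1, payload=b"temp=21.0")
assert rx.accept(p)       # fresh packet accepted
assert not rx.accept(p)   # the same packet replayed is dropped
```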

In sensor networks, secure management is needed at the base station level, since communication in a sensor network ultimately terminates at the base station.

CHAPTER 3

ATTACKS ON WIRELESS SENSOR NETWORK

3.1 Introduction

Many sensor network routing protocols are quite simple, and for this reason are sometimes even more susceptible to attacks against general ad-hoc routing protocols. Most network layer attacks against sensor networks fall into one of the following categories:

Spoofed, altered, or replayed routing information

Selective forwarding

Sinkhole attacks

Sybil attacks

Wormholes

HELLO flood attacks

Acknowledgement spoofing

3.2 Spoofed, altered, or replayed routing information

The most direct attack against a routing protocol is to target the routing information exchanged between nodes. By spoofing, altering, or replaying routing information, adversaries may be able to create routing loops, attract or repel network traffic, extend or shorten source routes, generate false error messages, partition the network, increase end-to-end latency, etc.

3.3 Selective forwarding

Multi-hop networks are often based on the assumption that participating nodes will faithfully forward received messages. In a selective forwarding attack, malicious nodes may refuse to forward certain messages and simply drop them, ensuring that they are not propagated any further. A simple form of this attack is when a malicious node behaves like a black hole and refuses to forward every packet she sees. However, such an attacker runs the risk that neighboring nodes will conclude that she has failed and decide to seek another route. A more subtle form of this attack is when an adversary selectively forwards packets. An adversary interested in suppressing or modifying packets originating from a select few nodes can reliably forward the remaining traffic and limit suspicion of her wrongdoing. Selective forwarding attacks are typically most effective when the attacker is explicitly included on the path of a data flow. However, it is conceivable that an adversary overhearing a flow passing through neighboring nodes might be able to emulate selective forwarding by jamming or causing a collision on each forwarded packet of interest.
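
A minimal simulation sketch of this behavior is given below; the node identifiers and packet fields are hypothetical. It shows how a compromised relay that drops only traffic from a few targeted sources delivers almost everything else, which makes the attack hard to distinguish from ordinary packet loss.

```python
# Illustrative simulation of selective forwarding: a compromised relay forwards
# most traffic normally but silently drops packets originating from targeted nodes.
import random

class Relay:
    def __init__(self, malicious: bool = False, targets: set[int] | None = None) -> None:
        self.malicious = malicious
        self.targets = targets or set()

    def forward(self, src: int, payload: bytes) -> bytes | None:
        if self.malicious and src in self.targets:
            return None        # selectively drop traffic from targeted sources
        return payload         # forward everything else to limit suspicion

honest = Relay()
attacker = Relay(malicious=True, targets={3})   # node 3 is the targeted source

traffic = [(random.randint(1, 5), b"reading") for _ in range(1000)]
delivered_honest = sum(honest.forward(s, p) is not None for s, p in traffic)
delivered_attack = sum(attacker.forward(s, p) is not None for s, p in traffic)
print(delivered_honest, delivered_attack)  # attacker delivers all but node 3's packets
```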

3.4 Sinkhole attacks

In a sinkhole attack, the adversary’s goal is to lure nearly all the traffic from a particular area through a compromised node, creating a metaphorical sinkhole with the adversary at the center. Because nodes on, or near, the path that packets follow have many opportunities to tamper with application data, sinkhole attacks can enable many other attacks (selective forwarding, for example). Sinkhole attacks typically work by making a compromised node look especially attractive to surrounding nodes with respect to the routing algorithm. For instance, an adversary could spoof or replay an advertisement for an extremely high quality route to a base station. Some protocols might actually try to verify the quality of the route with end-to-end acknowledgements containing reliability or latency information. In this case, a laptop-class adversary with a powerful transmitter can actually provide a high quality route by transmitting with enough power to reach the base station in a single hop, or by using a wormhole attack. Due to either the real or imagined high quality route through the compromised node, it is likely each neighboring node of the adversary will forward packets destined for a base station through the adversary, and also propagate the attractiveness of the route to its neighbors. Effectively, the adversary creates a large "sphere of influence", attracting all traffic destined for a base station from nodes several (or more) hops away from the compromised node.

One motivation for mounting a sinkhole attack is that it makes selective forwarding trivial. By ensuring that all traffic in the targeted area flows through a compromised node, an adversary can selectively suppress or modify packets originating from any node in the area. It should be noted that the reason sensor networks are particularly susceptible to sinkhole attacks is their specialized communication pattern. Since all packets share the same ultimate destination (in networks with only one base station), a compromised node needs only to provide a single high quality route to the base station in order to influence a potentially large number of nodes.
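
The sketch below illustrates, under a deliberately simplified routing rule (each node forwards to the neighbor advertising the lowest cost to the base station), why a single falsified advertisement is enough to pull in a neighborhood's traffic; the node names and costs are hypothetical.

```python
# Illustrative sketch of how a sinkhole forms: if a victim node simply picks the
# neighbour advertising the lowest cost (e.g. hop count) to the base station,
# a compromised node advertising an impossibly low cost attracts all traffic.

# Advertised cost to the base station, as heard by a victim node's neighbourhood.
advertised_cost = {
    "A": 4,           # honest neighbours, several hops from the base station
    "B": 5,
    "compromised": 1  # adversary falsely claims a one-hop, high-quality route
}

def choose_next_hop(ads: dict[str, int]) -> str:
    """Victim's routing rule: forward towards the neighbour with the lowest cost."""
    return min(ads, key=ads.get)

print(choose_next_hop(advertised_cost))  # -> "compromised": traffic flows into the sinkhole
```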

3.5 The Sybil attack

In a Sybil attack, a single malicious node presents multiple identities to the other nodes in the network. An insider cannot be prevented from participating in the network, but she should only be able to do so using the identities of the nodes she has compromised. Using a globally shared key allows an insider to masquerade as any (possibly even nonexistent) node, so identities must be verified. In the traditional setting this might be done using public key cryptography, but generating and verifying digital signatures is beyond the capabilities of sensor nodes. One solution is to have every node share a unique symmetric key with a trusted base station. Two nodes can then use a Needham-Schroeder-like protocol to verify each other’s identity and establish a shared key. A pair of neighboring nodes can use the resulting key to implement an authenticated, encrypted link between them.

In order to prevent an insider from wandering around a stationary network and establishing shared keys with every node in the network, the base station can reasonably limit the number of neighbors a node is allowed to have and send an error message when a node exceeds that limit. Thus, when a node is compromised, it is restricted to (meaningfully) communicating only with its verified neighbors. This is not to say that nodes are forbidden from sending messages to base stations or aggregation points multiple hops away, but they are restricted from using any node except their verified neighbors to do so. In addition, an adversary can still use a wormhole to create an artificial link between two nodes to convince them they are neighbors, but the adversary will not be able to eavesdrop on or modify any future communications between them.
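
A minimal sketch of the neighbor-limiting defense described above is given below, assuming the base station grants pairwise keys only up to a fixed per-node quota; the quota value and node identifiers are hypothetical.

```python
# Illustrative sketch of the neighbour-limiting defence: the base station hands out
# pairwise link keys only up to a fixed quota per node, so a compromised insider
# cannot wander the network establishing links with every node it meets.
MAX_NEIGHBOURS = 6  # hypothetical quota

class BaseStation:
    def __init__(self, limit: int = MAX_NEIGHBOURS) -> None:
        self.limit = limit
        self.verified: dict[int, set[int]] = {}   # node_id -> verified neighbour ids

    def request_link_key(self, a: int, b: int) -> bool:
        """Grant a pairwise key for (a, b) only if neither node exceeds its quota."""
        na = self.verified.setdefault(a, set())
        nb = self.verified.setdefault(b, set())
        if len(na) >= self.limit or len(nb) >= self.limit:
            return False          # send an error message instead of a key
        na.add(b)
        nb.add(a)
        return True

bs = BaseStation(limit=2)
assert bs.request_link_key(1, 2)
assert bs.request_link_key(1, 3)
assert not bs.request_link_key(1, 4)   # a compromised node 1 cannot add more neighbours
```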

3.6 Wormholes

In the wormhole attack, an adversary tunnels messages received in one part of the network over a low-latency link and replays them in a different part. The simplest instance of this attack is a single node situated between two other nodes, forwarding messages between the two of them. However, wormhole attacks more commonly involve two distant malicious nodes colluding to understate their distance from each other by relaying packets along an out-of-band channel available only to the attacker.
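
The effect of a wormhole on routing can be illustrated with a small hop-count example: once the colluding nodes tunnel packets between themselves, nodes that are really several hops apart appear to be neighbors, and shortest paths collapse through the tunnel. The topology below is hypothetical, and hop counts are computed with a plain breadth-first search.

```python
# Illustrative sketch of a wormhole's effect on hop counts in a hypothetical topology.
from collections import deque

def hops(adj: dict[str, set[str]], src: str, dst: str) -> int:
    """Hop count of the shortest path from src to dst (breadth-first search)."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        if node == dst:
            return d
        for nxt in adj[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return -1

# A simple 6-node chain: A - B - C - D - E - F.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"},
         "D": {"C", "E"}, "E": {"D", "F"}, "F": {"E"}}
print(hops(chain, "A", "F"))            # 5 hops without the wormhole

# Adversary tunnels between B and E, making them look like one-hop neighbours.
chain["B"].add("E")
chain["E"].add("B")
print(hops(chain, "A", "F"))            # 3 hops: traffic is pulled through the tunnel
```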


