An Outline Of Wireless Sensor Networks


INTRODUCTION

AN OUTLINE OF WIRELESS SENSOR NETWORKS

In the present technological scenario it is often necessary to collect information from a particular location or region by deploying a large number of interconnected tiny microsensors, and recent advancements have made this objective largely realizable. These microsensors are low-power devices with multiple sensing capabilities, programmable computing and communication facilities, and they are sufficiently reliable for monitoring and controlling a wide range of applications covering military, industrial, agricultural, biomedical and environmental domains. As the microsensors are economical and easily deployable, their impact in both wired and wireless applications is considerable, particularly in remote monitoring.

Each sensor node consists of four basic components, viz. a sensor, a processor, a radio unit and a power source. All four components fit into a matchbox-sized or even smaller module. Owing to these constraints, the computational capability, battery life and communication range are limited. A collection of a few hundred to a few thousand such nodes constitutes a sensor network; the nodes are deployed densely and coordinate amongst themselves to accomplish the desired task.

The three major functions of a sensor node are

Sensing: To respond to a change in the parameter of interest

Processing: Collecting and processing the information in association with neighboring sensors

Transmission: Sending the processed information or the information received from neighboring nodes to the base station directly or through other neighboring nodes.

In the modern era, Wireless Sensor Networks (WSNs) are employed in applications that include commercial systems, environmental monitoring, bio-informatics, security, military operations, missile guidance and tracking, and the growth of WSNs has been tremendous in the recent past. A WSN consists of a large number of cost-effective, short-communication-range tiny sensors which collectively coordinate among themselves to transmit data from any point in the network to a sink node or beacon. A node in a WSN has the following four major components: sensor, transceiver, processing unit and limited power source.

Sensor networks represent a significant advance over conventional sensors, as outlined below (C. Intanagonwiwat et al. 2000):

When the source of the signal is far away from the processing station, a large number of sensors are used in between to relay the data, and complex techniques are then required to differentiate the actual signals from environmental noise.

When data are to be collected from a wide and unreachable area, a large number of sensors are deployed, normally by air-dropping. The network thus formed is highly ad hoc in nature and is capable of forwarding the time series of the sensed events to a central node where the data are aggregated and processed.

It is obvious that the fashion in which the sensor nodes are distributed over the given area cannot be predetermined, and hence, in such a network where the nodes are randomly distributed, the protocol applied should be self-organizing. With the available onboard processor, each node locally processes the raw information it receives and transmits only the necessary part of it, so that the transmitted data may be easily aggregated and processed further at the sink node. This coordinating and mutually supporting nature of sensor networks makes them exceptional.

There are several proven protocols available for conventional ad hoc networks, but they are not well suited for sensor networks in view of their unique features and special application-oriented requirements. The basic differences between sensor networks and typical ad hoc networks are given below (C. Perkins, 2000):

The number of nodes in a sensor network is much higher than that of conventional ad hoc networks.

Sensor nodes are relatively densely deployed.

Sensor nodes are more prone to failures.

The topology of Sensor network is dynamic.

Sensor nodes use a broadcast communication paradigm, whereas ad hoc networks communicate on a point-to-point basis.

The power, computational capability and memory of sensor nodes are very limited.

As the number of nodes in a WSN is high and a global identification (ID) scheme would involve large overhead, global IDs are not preferred, in contrast to ad hoc networks.

In general, as the capacity of the irreplaceable power source of a WSN node is limited, optimal usage of power is required. Consequently, power conservation is given higher priority than Quality of Service (QoS), in contrast to traditional network concepts. Hence, multi-hop communication is preferred over single-hop communication in sensor network architectures to prolong the life of the nodes. Multi-hop communication also suits sensor networks well because the nodes are geographically placed very close to each other; each hop then requires considerably less transmission power than a single long hop, which helps sensor networks overcome the signal propagation effects experienced in long-distance wireless communication.

Hence, there is always a trade-off between prolonged lifetime and lower throughput or higher transmission delay in sensor networks.
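To make the energy argument concrete, the following is a minimal sketch (not taken from the cited works) comparing a single long hop against several shorter hops under a simple d^α path-loss model; the exponent and the constant are illustrative assumptions.

```python
# Minimal sketch: transmit energy per bit grows roughly as distance**alpha,
# so several short hops can use far less radio energy than one long hop.
def transmit_energy(distance_m, alpha=3.0, k_amp=1e-12):
    """Energy (arbitrary units) to push one bit over `distance_m` metres."""
    return k_amp * distance_m ** alpha

def multihop_energy(total_distance_m, hops, alpha=3.0, k_amp=1e-12):
    """Total energy when the same distance is covered in `hops` equal hops."""
    per_hop = total_distance_m / hops
    return hops * transmit_energy(per_hop, alpha, k_amp)

d = 100.0  # metres between source and sink (assumed)
single = transmit_energy(d)
for hops in (2, 4, 10):
    print(f"{hops:2d} hops use {multihop_energy(d, hops) / single:.1%} of the single-hop energy")
```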

FACTORS INFLUENCING SENSOR NETWORK DESIGN

The design of a sensor network system is influenced by several factors, some of which are listed below:

Fault tolerance

Scalability

Production costs

Operating Environment

Sensor network topology

Hardware constraints

Transmission media

Power consumption

Fault Tolerance

There are several reasons for the failure of a sensor node, such as power drain, physical damage, malfunction or moving out of communication range, and such failures should not affect the overall operation of the network; this property is referred to as reliability or fault tolerance (G. Hoblos et al. 2000; D. Nadig and S.S. Iyengar, 1993). The Poisson distribution is used to capture the probability of not having a failure over the time interval (0, t), and the reliability or fault tolerance of sensor node k is modeled as (Nadig and Iyengar, 1993)

Rk(t) = e^(−λk t)

where λk is the failure rate of sensor node k and t is the time period.
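A minimal sketch of this reliability model is given below; the failure rates and observation time are illustrative values, not figures from the cited works.

```python
import math

def node_reliability(failure_rate, t):
    """Rk(t) = exp(-lambda_k * t): probability that node k survives until time t."""
    return math.exp(-failure_rate * t)

# Illustrative failure rates (failures per hour); all values are assumptions.
for lam in (1e-4, 1e-3, 1e-2):
    print(f"lambda = {lam:g} /h: R(t = 100 h) = {node_reliability(lam, 100):.3f}")
```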

The level of fault tolerance may be selected according to the type of application and the associated constraints. For example, the fault tolerance of sensor networks used in a battlefield for surveillance and detection must be high, as the nodes are highly prone to failure by attacks, whereas a sensor network implemented for environmental monitoring may have a relatively low fault tolerance requirement, for obvious reasons.

Scalability

A system is said to be scalable if its availability and fault tolerance remain within acceptable thresholds when the load or the number of subsystems in the system is increased (Lee et al, 1998). The importance of scalability as a design factor is due to the fact that such systems should ideally be able to add more systems or sub-systems without having to re-engineer the existing architecture.

Scalability is closely related to the density of the sensor nodes. The availability and the interval between failures of the entire network increase with node density, as some nodes can act as redundant back-up nodes. This method of scaling is sometimes referred to as scale-out, where more load (nodes) is added to a given system (Michael et al, 2007). In scale-up, one adds more resources (memory, processing power, lower power usage etc.) to a single sensor node so as to increase its coverage, which in turn can affect the scalability of the entire network.

The number of sensor nodes utilized in analyzing an incident may be in the order of hundreds or thousands, and could increase to an enormous level depending on the specific application. Novel approaches should therefore be established to deal with such numbers of nodes. These approaches must also take into consideration the high-density nature of sensor networks. The density could range from a few sensor nodes to a few hundred sensor nodes in a region that can be less than 10 m in diameter (S. Cho and A. Chandrakasan, 2000). The density can be calculated according to (N. Bulusu et al. 2001) as

μ(R) = (N π R²) / A

where N represents the number of sensor nodes scattered in region A and R denotes the radio transmission range. Here, μ(R) gives the number of nodes within the transmission radius of each node in region A.
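The following short sketch evaluates this density expression for assumed values of N, A and R, giving the expected number of neighbours per node.

```python
import math

def node_density(n_nodes, area_m2, radio_range_m):
    """mu(R) = N * pi * R**2 / A: expected neighbours within radio range."""
    return n_nodes * math.pi * radio_range_m ** 2 / area_m2

# Example (assumed values): 1000 nodes scattered over 1 km^2 with a 30 m radio range.
print(f"Expected neighbours per node: {node_density(1000, 1_000_000, 30):.1f}")
```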

Moreover, the number of nodes in a region specifies the node density, which depends on the application in which the sensor nodes are deployed (E. Shih et al. 2001). The density can be as high as 20 sensor nodes/m³ (E. Shih et al. 2001; E.M. Petriu et al. 2000; A. Cerpa et al. 2001).

Production costs

For certain critical applications a sensor network is economical. However, if the cost of deploying a sensor network for a given application is higher than that of a more reliable and more fault-tolerant conventional network, then the sensor network is not justified. As a sensor network comprises a huge number of sensor nodes, the cost of a single node is critical to the estimation of the overall cost of the network, which gives the cost of each sensor node critical importance (J.M. Rabaey et al. 2000; J. Rabaey et al. 2000).

Operating Environment

Sensor nodes are densely deployed either very close to or directly inside the phenomenon to be observed. Therefore, they usually work unattended in remote geographic areas. They may be working

in busy intersections,

in the interior of a large machinery,

at the bottom of an ocean,

inside a twister,

on the surface of an ocean during a tornado,

in a biologically or chemically contaminated field,

in a battlefield beyond the enemy lines,

in a home or a large building,

in a large warehouse,

attached to animals,

attached to fast moving vehicles, and

in a drain or river moving with current.

The above list gives an idea of the conditions under which sensor nodes are expected to work. They work under high pressure at the bottom of an ocean, in harsh environments such as debris or a battlefield, under extreme heat and cold such as in the nozzle of an aircraft engine or in arctic regions, and in extremely noisy environments such as under intentional jamming.

Hardware Constraints

As discussed at the beginning of this chapter, a sensor node comprises four fundamental units, viz. a sensing unit, a processing unit, a transceiver unit and a power unit, which are depicted in Figure 1.1. Additional units such as a location finding system, a mobilizer etc. may also be included, based on the requirement and application. In what follows, the functionalities of the basic units are discussed briefly.

Sensing Unit: The first sub-module of the sensing unit is a basic sensor or a typical transducer, which collects the required information and converts it into an equivalent electrical signal, after a signal conditioning process if necessary. This analog signal is then passed on to the second sub-module, the Analog to Digital Converter (ADC), where it is converted into an equivalent digital signal, since only digital information can be handled by the processing unit to which the signal is subsequently fed.

Processing Unit: The processing unit, with the help of its small amount of associated memory, manages the procedures that enable the sensor node to coordinate and collaborate with neighbouring nodes in executing the allocated sensing task.

Transceiver Unit: The transceiver unit is responsible for the transmission and reception of information in a predefined pattern to and from the neighbouring nodes, and thereby establishes a link with the network.

Power Unit: This unit supplies power to all the other units discussed above. In general, it is irreplaceable and has limited capacity.

Figure 1.1: Components of a Sensor Node

In general, a node in a sensor network should be able to localize itself, as sensor network routing approaches and sensing assignments need accurate information about the location of data collection, and a location finding system may be required for this purpose. It may also be necessary in certain applications to shift the node from one location to another to accomplish the task, and a mobilizer may be helpful in such cases (C. Intanagonwiwat et al. 2000; G.J. Pottie, W.J. Kaiser, 2000).

For better operation, a sensor node should possess the following characteristics (J.M. Kahn et al. 1999):

Consume extremely low power

Operate in high volumetric densities

Economical and easily dispensable

Be autonomous and operate unattended

Be adaptive to the environment

In general, sensor nodes are unapproachable once dropped over unreachable terrain and similar locations, and the longevity of a sensor node depends on its power source. The tiny size of the node restricts not only the size of the power source but also its lifetime (G.J. Pottie et al. 2000), and hence it is necessary to regulate the power drawn from it. For example, to extend the life of Wireless Integrated Network Sensors (WINS), the average system supply current must be kept extremely low (S. Vardhan et al. 2000). In addition, energy scavenging techniques, such as solar cells for extracting energy from the environment (J.M. Rabaey et al. 2000), may also be used to extend the lifetime.
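As a rough illustration of why the average supply current matters, the sketch below relates an assumed battery capacity to node lifetime for a few average current draws; all of the numbers are assumptions, not values from the cited works.

```python
def lifetime_hours(capacity_mAh, avg_current_mA):
    """Idealised node lifetime: battery capacity divided by average current draw."""
    return capacity_mAh / avg_current_mA

capacity = 2200.0  # mAh, roughly one AA-class cell (assumed)
for avg_uA in (30, 100, 1000):  # assumed average supply currents in microamperes
    hours = lifetime_hours(capacity, avg_uA / 1000.0)
    print(f"{avg_uA:4d} uA average draw -> about {hours / 24 / 365:.1f} years")
```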

Turning radio circuits on and off consumes significant energy, as the presently available commercial radio technologies are not efficient enough. This problem is further aggravated because designing energy-efficient, low duty cycle radio circuits is still a difficult task (E. Shih et al. 2000). In spite of the availability of advanced computational power in tiny processors, suitable processing and memory units are still not available for sensor nodes (A. Perrig et al. 2001).

Sensor Network Topology

In general, a large number of sensor nodes are deployed over an inaccessible area and so they are not approachable. As the deployment density is very high, maintaining the topology is a tough task, and the frequent failure of nodes further complicates the problem (C. Intanagonwiwat et al. 2000; E. Shih et al. 2001). Hence, a careful approach is required to handle high-density sensor networks. The different phases of topology maintenance are discussed below.

Pre-deployment and deployment phase

The modes of sensor node deployment include:

Air dropping from a plane

Delivering in an artillery shell, rocket or missile

Throwing by a catapult (from a shipboard, etc.)

Placing one by one either by a human or a robot

Even though the total number of sensors and their mode of deployment may deviate from a carefully engineered deployment plan, the approaches for initial deployment must satisfy the following conditions:

Minimize the installation cost

Remove the requirement for any pre-organization and pre-planning

Increase the flexibility of arrangement

Support self-organization and fault tolerance

Post-deployment phase

Once deployed, the nodes organize themselves into a certain topology; changes in the topology during the post-deployment phase are due to the following alterations that may happen in sensor nodes (S. Meguerdichian et al. 2001):

Position

Reachability (due to jamming, noise, moving obstacles, etc.)

Available energy

Malfunctioning

Task details

Sensor nodes may be deployed statically. However, device failure is very common because of energy depletion or destruction. There could also be sensor networks with highly mobile nodes. In addition, sensor nodes and the network experience changing task dynamics and may be subjected to intentional jamming. Thus, sensor network topologies are prone to frequent alteration after deployment.

Redeployment of additional nodes phase

Additional sensor nodes may need to be deployed after the post-deployment phase to maintain the network functions and to increase reliability. These additional nodes may be deployed at any time to replace faulty nodes or in response to changes in task dynamics. The addition of new nodes requires re-organization of the network.

Transmission Media

Communicating nodes in a multihop sensor network are connected by a wireless medium. These links in general are created by radio, infrared or optical media. The availability of the selected transmission medium across the world is crucial to facilitate global functioning of these networks.

For sensor networks, a tiny, economical, ultra-low-power transceiver is preferred for obvious reasons. According to A. Porret et al. (2000), hardware constraints restrict the choice of carrier frequency for such transceivers to the ultra-high-frequency range. The authors also recommend the use of the 433 MHz ISM band in Europe and the 915 MHz ISM band in North America. The transceiver design issues in these two bands are discussed in (T. Melly et al. 1999; P. Favre et al. 1998). The major benefits of using the ISM bands are license-free radio operation, the large spectrum allocation and global availability.

Though constrained by certain rules and conditions, such as power limitations and harmful interference from existing applications, the ISM bands are not tied to a specific standard. This gives additional freedom for the implementation of power-saving approaches in sensor networks. These frequency bands are also regarded as unregulated frequencies. The low-power sensor device described in (A. Woo et al. 2001) uses a single-channel RF transceiver operating at 916 MHz.

The choice of transmission medium for a sensor network is further limited by the specific requirements of the application. For example, marine applications may require an aqueous transmission medium, which calls for long-wavelength radiation that can penetrate the water surface, while battlefield applications must cope with severe interference. Moreover, a sensor node's antenna does not have the height and radiation power of those in other wireless devices. Thus, the choice of transmission medium must be supported by robust coding and modulation schemes that efficiently model these vastly divergent channel characteristics.

Data processing

The energy expended in data processing is much less than that spent on data communication, and Pottie et al (2000) have clearly described this disparity. Local data processing is therefore very important in reducing power consumption in a multihop sensor network.

Thus, a sensor node has built-in computational capabilities and is good at interacting with its surroundings. In view of the restrictions on cost and size, Complementary Metal Oxide Semiconductor (CMOS) technology has been adopted for the microprocessor section of sensor nodes. However, this technology has inherent limitations on energy efficiency. A CMOS transistor pair dissipates power every time it is switched, and this switching power is proportional to the switching frequency, the device capacitance (which in turn depends on the area) and the square of the voltage swing. Reducing the supply voltage is thus an effective way of lowering power consumption in the active state. Dynamic voltage scaling, explored in (R. Min et al. 1995; T. Pering et al. 1998), aims to adapt the processor's supply voltage and operating frequency to match the workload. When a microprocessor handles a time-varying computational load, reducing the operating frequency during periods of reduced activity yields a linear decrease in power consumption, while reducing the operating voltage gives quadratic gains as well, but at the cost of peak performance. Although large energy gains may be obtained in this way, the processor cannot always remain at such a low-power operating point. Hence, the processor's operating voltage and frequency are dynamically adapted to the instantaneous processing requirements for better energy management.

Sinha et al (2001) proposed a workload prediction approach based on adaptive filtering of the past workload profile and examined several filtering approaches. Several low-power CPU management approaches are also discussed in (K. Govil et al. 1995; J. Lorch et al. 1996). The power consumption in data processing (Pp) may be formulated as follows:

Pp = C Vdd² f + Vdd I0 e^(Vdd / (n′ VT))

where C denotes the total switching capacitance, Vdd the voltage swing and f the switching frequency; the second term, in which I0 and n′ are processor-dependent parameters and VT is the thermal voltage, gives the power loss due to leakage current (A. Sinha et al. 2001). Lowering the threshold voltage to satisfy performance requirements results in high sub-threshold leakage currents. Together with the low duty cycle operation of the microprocessor, the associated power loss becomes considerable (E. Shih et al. 2001) in a sensor node.
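A small sketch of the dynamic (switching) term of this expression is given below, showing the quadratic benefit of lowering the supply voltage; the capacitance, voltage and frequency values are illustrative assumptions, and the leakage term is omitted.

```python
def switching_power(c_total_farads, v_dd, freq_hz):
    """Dynamic CMOS power: P = C * Vdd**2 * f (the leakage term is ignored here)."""
    return c_total_farads * v_dd ** 2 * freq_hz

c = 1e-9  # total switching capacitance in farads (assumed)
for v_dd, f in ((3.3, 8e6), (1.8, 4e6), (1.0, 1e6)):
    p_mw = switching_power(c, v_dd, f) * 1e3
    print(f"Vdd = {v_dd} V, f = {f / 1e6:.0f} MHz -> dynamic power ~ {p_mw:.2f} mW")
```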

It is to be observed that there may be additional circuitry for data encoding and decoding, and application-specific integrated circuits may also be used in specific cases. In all these cases, the design of sensor network approaches is affected by the corresponding power expenditure.

SENSOR NETWORKS COMMUNICATION ARCHITECTURE

The sensor nodes are generally scattered randomly in a sensor field, as shown in Figure 1.2. Each of these scattered sensor nodes has the ability to collect data and route it back to the sink node through a multihop, infrastructure-less architecture. The sink may communicate with the task manager node through the Internet or a satellite link to deliver the data to the user, as shown in Figure 1.2.


Figure 1.2: Sensor nodes scattered in a sensor field

The protocol stack used by the sink and all sensor nodes is given in Figure 1.3. This protocol stack combines power and routing awareness, integrates data with networking protocols, communicates power-efficiently through the wireless medium, and supports the cooperative efforts of sensor nodes. The protocol stack comprises the application layer, transport layer, network layer, data link layer, physical layer, power management plane, mobility management plane, and task management plane (Qinghua Wang and Ilangko Balasingham, 2010).

Depending on the sensing tasks, various types of application software may be built and used on the application layer. The transport layer helps to maintain the flow of data required by the sensor network applications. The network layer takes care of routing the data supplied by the transport layer. Since the environment is noisy and sensor nodes may be mobile, the MAC protocol must be power-aware and able to minimize collisions with neighbours' broadcasts. The physical layer addresses the need for simple but robust modulation, transmission and receiving techniques. In addition, the power, mobility and task distribution among the sensor nodes are controlled by the respective management planes. These planes help the sensor nodes coordinate the sensing task and lower the overall power consumption.

Figure 1.3. The sensor networks protocol stack.

This research work mainly concentrates on the transport layer and the protocols used in the transport layer.

A THOROUGH ANALYSIS OF TRANSPORT LAYER

The importance and functions of the transport layer are discussed in the literature (J.M. Rabaey et al. 2000). This layer is especially needed when the system is to be accessed through the Internet or other external networks. However, not many techniques addressing the transport layer issues of sensor networks have been reported in the literature. TCP, with its current transmission window mechanisms, does not match the extreme characteristics of the sensor network environment. An approach such as TCP splitting (A. Bakre and B.R. Badrinath, 1995) may be needed so that sensor networks can interact with other networks such as the Internet. In such a scheme, TCP connections terminate at the sink node, and a special transport layer protocol handles the communication between the sink node and the sensor nodes. Thus, the communication between the user and the sink node is by UDP or TCP through the Internet or satellite, while the communication between the sink and the sensor nodes may be purely by UDP-type protocols, as each sensor node has limited memory (Akyildiz et al, 2002).

Unlike protocols such as TCP, end-to-end communication schemes in sensor networks do not depend on global addressing. Attributes such as power consumption and scalability, and features such as data-centric routing, require a different kind of handling at the transport layer of sensor networks. Therefore, new kinds of transport layer protocols are needed that give due attention to the unique requirements of sensor networks.

The development of transport layer protocols is a challenging task because the sensor nodes are constrained by inherent factors such as limited power and memory. Consequently, a sensor node cannot store large amounts of data like a server in the Internet, and acknowledgements are too costly for sensor networks. Thus, new schemes that split the end-to-end communication, probably at the sinks, may be required, with UDP-type protocols used inside the sensor network and conventional TCP/UDP protocols in the Internet or satellite network.

The transport layer handles the end-to-end transport of packets across a network. Its main aim is to connect application processes running on end hosts as seamlessly as possible, as if the two end applications were connected by a reliable dedicated link, thus making the network invisible. To carry out this task, it must be capable of handling several non-idealities of real networks, such as shared links, data loss and duplication, contention for resources, and variability of delay.

MULTIPLEXING: MANAGING LINK SHARING

One fundamental purpose of the transport layer is multiplexing and demultiplexing. Generally, a number of application processes may be running on one host at the same time, while the network layer only deals with sending a single stream of data out of the computer. The transport layer therefore combines data from the various applications into a single stream before passing it to the network layer; this is multiplexing. Conversely, when data is received from outside, the transport layer delivers it to the appropriate application, such as a web browser or e-mail client; this is demultiplexing. Figure 1.4 shows the data flow directions for multiplexing (sending) and demultiplexing (receiving).

Figure 1.4: Internet Protocol Stack

Multiplexing is achieved by partitioning the data flows from the applications into short packets, also called segments; packets from a number of flows can then be interleaved as they are passed to the network layer. Demultiplexing is achieved by assigning each communication flow a unique identifier: the sender tags each packet with its flow's identifier, and the receiver separates incoming packets into flows based on these identifiers. In the Internet, these identifiers are the transport layer port numbers. The network layer, by contrast, only identifies the sending and receiving hosts and which transport layer protocol is being used.
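The following toy sketch (an illustration only, not a real protocol stack) shows the essence of port-based demultiplexing: each incoming packet carries a destination port, and the transport layer hands its payload to whichever application registered that port.

```python
# Toy demultiplexer: packets are (dst_port, payload) tuples and each
# application registers a handler for its port.
handlers = {}

def register(port, handler):
    handlers[port] = handler

def demultiplex(packet):
    dst_port, payload = packet
    handler = handlers.get(dst_port)
    if handler is None:
        return  # no application bound to this port: drop the packet
    handler(payload)

register(80, lambda data: print("web browser got:", data))
register(25, lambda data: print("mail client got:", data))

for pkt in [(80, b"index.html"), (25, b"hello"), (9999, b"dropped")]:
    demultiplex(pkt)
```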

Multiplexing and demultiplexing are among the most basic tasks of the transport layer. Different transport layer protocols offer different subsets of the possible services, but all of them perform multiplexing and demultiplexing.

There are two leading kinds of transport layer protocol used in the Internet (Boussen, S., 2009) namely

User Datagram Protocol (UDP)

Transmission Control Protocol (TCP)

UDP offers unreliable and connectionless service to the upper layers and on the other hand, TCP provides reliable and connection-based services.

UDP: Multiplexing Only

UDP is a very simple, connectionless protocol. Each UDP packet is treated independently by the transport layer, rather than as part of an ongoing flow. Beyond minor error checking, UDP essentially performs only multiplexing and demultiplexing. There is no assurance that packets will be received in the same order they were sent, and UDP does not control its transmission rate. The basic aim in the design of UDP is to give applications more control over the data sending process and to minimize the delay associated with setting up a connection. This is a desirable characteristic for delay-sensitive applications such as streaming video and Internet telephony. In general, applications that can tolerate some data loss or corruption but are sensitive to delay are those for which UDP is most suitable.
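A minimal example of this connectionless behaviour, using Python's standard socket API over the loopback interface, is shown below; the address, port and payload are arbitrary.

```python
import socket

# Receiver: bind to a local port and read whatever datagrams arrive.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 5005))

# Sender: no connection setup; each datagram is handled independently.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-0001", ("127.0.0.1", 5005))

data, addr = recv_sock.recvfrom(1024)  # blocks until a datagram arrives
print(f"received {data!r} from {addr}")
```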

TCP: Reliable Connection-Oriented Transport

Unlike UDP, TCP offers a connection-oriented service, sending data as a stream of interrelated packets, which makes concepts such as the order of packets significant. In particular, TCP offers a reliable service to upper layer applications, ensuring not only that the packets are correctly received but also that they are delivered in the order in which they were sent. When establishing a connection, TCP uses a three-way handshake between sender and receiver, in which they agree on the protocol parameters to be used.

TCP receives data from the application as a single stream (for example, a huge file) and partitions it into a sequence of packets. TCP tries to use large packets to reduce overhead, but the packet size is limited by the Maximum Transfer Unit (MTU), the maximum size that the network can carry efficiently, and TCP is responsible for selecting an appropriate size using the path-MTU-discovery (PMTUD) technique. In contrast, UDP is given data already divided into packets, and it is the application's task to respect MTU limitations.

TCP is the most common transport protocol in the Internet; measurements indicate that it accounts for about 80% of the traffic (1). Applications such as file transfer, Web browsing and e-mail use TCP for its ability to transfer continuous streams of data reliably, and because many firewalls do not correctly pass other protocols. The main features of TCP, namely reliable transmission and congestion control, are discussed in greater detail in the following two subsections.

Reliable Transmission: Managing Loss, Duplication, And Reordering

When the underlying network layer does not guarantee the delivery of all packets, achieving reliable transmission over this unreliable service becomes a critical task. Reasons for packet loss include transient routing loops, congestion of a resource, and physical errors that were not successfully corrected by the physical layer or link layer.

This problem is very similar to that faced by the link layer. The difference is that the link layer operates over a single unreliable physical link to make it appear reliable to the network layer, whereas the transport layer operates over an entire unreliable network to make it appear reliable to the application. For this reason, the algorithms employed at the transport layer are very similar to those employed at the link layer, and they are reviewed briefly here.

ARQ (Automatic Repeat reQuest) is the basic mechanism for dealing with data corruption. When the receiver receives a correct data packet, it sends a positive acknowledgment (ACK); when it detects an error, it sends a negative acknowledgment (NAK). Because physically corrupted packets are usually discarded by the link layer, the transport layer will not directly observe that a packet has been lost, and hence many transport layer protocols do not explicitly implement NAKs. A notable exception is multicast transport protocols. Multicast protocols allow the sender to send a single packet, which is replicated in the network to reach multiple receivers, possibly numbering in the thousands. If each receiver sent an ACK for every packet, the sender would be flooded. Instead, if receivers send only NAKs when they believe packets are missing, the network load is greatly reduced. TCP implicitly treats the ACKs triggered by three or more out-of-order packets (which appear as "duplicate ACKs") as forming a NAK.
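The sketch below illustrates, in a highly simplified form, how repeated (duplicate) acknowledgments can be interpreted as an implicit NAK; the threshold of three duplicates follows the description above, while the class and the ACK sequence are illustrative.

```python
# Simplified far beyond any real TCP: the sender counts repetitions of the
# same cumulative ACK; three duplicates are treated as an implicit NAK.
DUP_ACK_THRESHOLD = 3

class DupAckDetector:
    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        """Return the sequence number to retransmit, or None."""
        if ack_no == self.last_ack:
            self.dup_count += 1
            if self.dup_count == DUP_ACK_THRESHOLD:
                return ack_no  # fast retransmit of the missing segment
        else:
            self.last_ack, self.dup_count = ack_no, 0
        return None

detector = DupAckDetector()
for ack in [1, 2, 3, 3, 3, 3]:  # segment 3 was lost; later arrivals duplicate the ACK
    lost = detector.on_ack(ack)
    if lost is not None:
        print(f"retransmit segment {lost}")
```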

An interesting problem immediately arises. If a sender transmits a new packet only after making sure that the previously transmitted packet has been received correctly, by waiting for the receiver's ACK, the system is inefficient because the resources are underutilized: the sender can send only one packet per round-trip time, the time elapsed between the moment a packet is transmitted by the sender and the moment the sender receives the corresponding ACK. For example, consider a 1 Gbit/s (1,000,000,000 bits per second) connection between two hosts that are separated by 1500 km and therefore have a round-trip distance of 3000 km. Sending a packet of size 10 kbit takes only 10 µs, whereas the round-trip time cannot be smaller than the round-trip distance divided by the speed of light (300,000 km per second), which in this case is 10 ms. In other words, the utilization of the sender is about 0.1%, which is clearly not acceptable. This is the motivation for the general sliding window algorithm discussed in the following subsection.
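The figures in this example can be checked directly, as the short calculation below shows.

```python
link_rate_bps = 1_000_000_000        # 1 Gbit/s
packet_bits = 10_000                 # 10 kbit packet
round_trip_km = 3000
speed_of_light_km_s = 300_000

transmit_time = packet_bits / link_rate_bps       # time to put the packet on the wire
rtt = round_trip_km / speed_of_light_km_s         # minimum round-trip time
utilization = transmit_time / (transmit_time + rtt)

print(f"transmit time = {transmit_time * 1e6:.0f} us, RTT = {rtt * 1e3:.0f} ms")
print(f"one packet per RTT -> utilization = {utilization:.2%}")
```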

Congestion Control: Managing Resource Contention

Congestion occurs in a network because flows attempt to use more resources than the network can provide. Initially, transport layer protocols were designed to operate at a speed dictated by the receiver's data processing capability: to keep pace with the receiver, the transport layer implements a flow control mechanism that slows down the sender whenever necessary (V. Jacobson, 1988). Despite such measures, during the 1980s the Internet suffered from congestion collapses. These were caused by the sliding window mechanism resending packets even when the receivers were not overloaded, needlessly loading the network and driving it to the point of inoperability. As a result, a set of rules was proposed by Jacobson (1988) for senders to control their window sizes so as to limit their aggregate sending rate while maintaining an approximately fair allocation of rates.

Recall from the previous subsection on reliable transmission that senders use a window size W > 1 to increase the utilization of the network.

The two important issues in congestion control are determining the ideal rate allocation for each flow in a network and finding a practical method to achieve it using distributed control. The latter is difficult because of the decentralized nature of the Internet: senders are not aware of the capacity of the links they use, the number of flows sharing those links, or the durations of those flows, nor do links have knowledge of the other links used by the flows traversing them. Furthermore, the arrival of a new flow is highly unpredictable.

Figure 1.5 depicts a typical scenario in which two flows use three links each, one of which is shared between them.

Figure 1.5: Two flows sharing a link, and also using nonshared links

A congestion control algorithm has two main phases, slow start and congestion avoidance, with the latter punctuated by short periods of retransmission and loss recovery. TCP Reno, one of the standard TCP algorithms, in which congestion is indicated by packet loss, is considered here to discuss both phases.

Whenever a TCP connection begins, it starts in the slow start phase with an initial window size of two packets, and the window size is doubled every round-trip time. This continues until a packet loss is observed or the window size reaches a predefined threshold called the slow-start threshold; on sensing a loss the window is halved, and in either case the system enters the congestion avoidance phase. Note that the sender increases its transmission rate exponentially during slow start. In the congestion avoidance phase, the sender performs what is known as Additive Increase Multiplicative Decrease (AIMD) adjustment, first proposed by Chiu and Jain (1989) as a means to obtain fair allocation and implemented in the Internet by Jacobson (1988). Every round-trip time, if all packets are successfully received, the window is increased by one packet; when there is a loss event, the sender halves its window. Because large windows are reduced by more than small windows, AIMD tends to equalize the window sizes of flows sharing a congested link (Chiu and Jain, 1989).

Finally, if a timeout occurs, the sender starts from slow start again. Figure 1.6 shows how the window evolves over time in TCP Reno.
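A minimal, idealized sketch of this window evolution (doubling during slow start, additive increase and halving on loss afterwards) is given below; the slow-start threshold and the positions of the loss events are arbitrary illustrative choices.

```python
def reno_window_trace(rounds, ssthresh=16, losses=frozenset({12, 20})):
    """Idealised per-RTT congestion window evolution: slow start, then AIMD."""
    cwnd, trace = 2.0, []
    for rtt in range(rounds):
        trace.append(cwnd)
        if rtt in losses:                 # loss event: multiplicative decrease
            cwnd = max(cwnd / 2.0, 2.0)
        elif cwnd < ssthresh:             # slow start: double every RTT
            cwnd = min(cwnd * 2.0, ssthresh)
        else:                             # congestion avoidance: +1 packet per RTT
            cwnd += 1.0
    return trace

print(reno_window_trace(25))
```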

In general, the basic engineering intuition behind most congestion control protocols is to start probing the network at a low transmission rate, ramp up quickly at first, then slow the pace of increase until an indicator of congestion occurs, at which point the transmission rate is reduced. Packet loss or queueing delay (Brakmo and Peterson, 1995) is often used as the congestion indicator, and packet loss events are in turn inferred from local measurements such as three duplicate acknowledgments or a timeout. These design choices are clearly influenced by the characteristics of wire-line packet-switched networks, in which congestion is the dominant cause of packet loss.

Figure 1.6: TCP Reno Window Trajectory

Initially, the choice of the ramp-up speed and congestion indicators was based mostly on engineering intuition. However, recent developments in predictive models of congestion control have paved the way for a more systematic design and tuning of the protocols.

This window adaptation algorithm is combined with sliding window transmission control to form the complete window-based congestion control mechanism, as illustrated in Figure 1.7. The transmission control takes two inputs: the window size and the acknowledgments from the network. The window size is set by the congestion control algorithm, such as TCP Reno, which updates the window based on the estimated congestion level in the network. In summary, with window-based algorithms each sender controls its window size, an upper bound on the number of packets that have been sent but not yet acknowledged. As pointed out by Jacobson (1988), the actual rate of transmission is controlled, or "clocked", by the stream of received acknowledgments (ACKs).

Figure 1.7: Window-based congestion control

A new packet is transmitted only when an ACK is received, thereby ideally keeping the number of outstanding packets constant and equal to the window size.
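The toy sketch below illustrates this ACK-clocking behaviour: at most one window of packets is outstanding, and each arriving ACK releases exactly one new packet; the window size and the event sequence are illustrative.

```python
# Toy ACK-clocked sender (illustrative only): at most `window` packets are
# outstanding, and every incoming ACK releases exactly one new packet.
class AckClockedSender:
    def __init__(self, window):
        self.window = window
        self.next_seq = 0
        self.outstanding = set()

    def fill_window(self):
        while len(self.outstanding) < self.window:
            self.send(self.next_seq)

    def send(self, seq):
        self.outstanding.add(seq)
        self.next_seq = max(self.next_seq, seq + 1)
        print(f"send packet {seq} (in flight: {len(self.outstanding)})")

    def on_ack(self, seq):
        self.outstanding.discard(seq)
        self.fill_window()  # one ACK in, one new packet out

sender = AckClockedSender(window=3)
sender.fill_window()          # packets 0, 1, 2 go out
for acked in (0, 1, 2):
    sender.on_ack(acked)      # each ACK clocks out one more packet
```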

TIMING RESTORATION: MANAGING DELAY VARIATION

In most cases it is desirable for the transport layer to pass data to the receiving application as soon as possible. The notable exception is streaming audio and video. For these applications, the temporal spacing between packets is important; if audio packets are played out too early, the sound becomes distorted. However, the spacing between packets is modified when packets encounter network queueing, which fluctuates over time. In its role of hiding lower layer imperfections from the upper layers, the transport layer must re-establish the timing relations between packets before passing them to the application.

Specialist transport protocols such as the Real-time Transport Protocol (RTP) (H. Schulzrinne et al. 2003) are used by flows requiring such timing information. RTP operates on top of a traditional transport layer protocol such as UDP and provides each packet with a timestamp. At the receiver, packets are inserted as soon as they arrive into a special buffer known as a jitter buffer, or playout buffer. They are then extracted from the jitter buffer in the order in which they were sent, at intervals exactly equal to the intervals between their timestamps. A jitter buffer can only add delay to packets, not remove it; if a packet is received with excessive delay, it is simply discarded by the jitter buffer. The size of the jitter buffer determines the trade-off between the delay and the packet loss experienced by the application.
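A simplified playout-buffer sketch follows: packets that arrive within the buffer's delay budget are replayed at their original spacing, while later packets are discarded; the timestamps and the 100 ms budget are assumed values.

```python
def playout_schedule(packets, buffer_delay_ms=100):
    """packets: (send_timestamp_ms, arrival_time_ms) pairs in send order.
    Returns (timestamp, play_time) for kept packets; late packets are dropped."""
    kept = []
    for ts, arrival in packets:
        play_time = ts + buffer_delay_ms      # original spacing, shifted by the budget
        if arrival <= play_time:
            kept.append((ts, play_time))
        # else: the packet arrived after its playout slot and is discarded
    return kept

packets = [(0, 40), (20, 90), (40, 180), (60, 130)]  # the third packet is very late
print(playout_schedule(packets))
```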

TCP may itself cause delay fluctuation, both through ACK-clocking and through the variations in packet rate induced by Reno-like congestion control. When transmitting video and other streaming data, it is sometimes desirable to send packets with more uniform spacing. The burstiness caused by ACK-clocking may be avoided by paced TCP: rather than sending packets exactly when acknowledgments are received, a paced sender spreads one window's worth of packets uniformly throughout a round-trip time. Many congestion control algorithms have also been proposed that dispense with Reno's AIMD, reducing burstiness on longer time scales; notable examples include TCP Vegas and TCP-Friendly Rate Control (TFRC).
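As a one-line illustration of pacing, the interval between packets is simply the round-trip time divided by the window size; the numbers below are assumptions.

```python
window_packets = 40   # packets per congestion window (assumed)
rtt_ms = 100.0        # round-trip time in milliseconds (assumed)
print(f"paced sender: one packet every {rtt_ms / window_packets:.1f} ms")
```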

RECENT AND FUTURE EVOLUTION

With the Internet expanding to a global scale and becoming ubiquitous, it is encountering more and more new environments. The TCP/IP "hourglass model" (Brakmo and Peterson, 1995) has been very successful at separating applications from the underlying physical networks and has aided the Internet's rapid growth. On the other hand, some of its basic assumptions are becoming inaccurate or invalid, which imposes new challenges. This section describes some of the current topics in both the Internet Engineering Task Force (IETF, the primary Internet standards body) and the broader research community. Many of these topics touch upon both fundamental questions and implementation issues, and the main implementation issues, along with the theoretical ones, are discussed here, though not exhaustively. For example, more variants of TCP congestion control have been proposed in the last few years than can be surveyed here. Beyond this section, there are many other exciting developments in the theory and practice of transport layer design for future networks.

Protocol Enhancement

1) Datagram Congestion Control Protocol: TCP provides reliable, in-order data transfer and congestion control, whereas UDP provides neither. However, in applications such as video transmission, congestion control is mandatory although guaranteed delivery is not, and such applications cannot tolerate the delay caused by retransmission and in-order delivery. Consequently, the IETF has developed a new protocol called DCCP (Datagram Congestion Control Protocol), which may be viewed either as UDP with congestion control or as TCP without the reliability guarantees. Because many firewalls block unknown protocols, DCCP has not yet been widely used, although it is implemented in many operating systems.

2) Multiple indicators of congestion: The current TCP NewReno relies primarily on the detection of packet loss to determine its window size. Other proposals rely primarily on estimates of the queueing delay. The utility maximization theory applies to networks in which all flows belong to the same "family". For example, all flows in the network may respond solely to loss; different flows may respond differently to loss, provided that loss is the only congestion signal they use. However, when a single network carries flows from both the "loss" and "delay" families, or flows responding to other "price" signals such as explicit congestion notifications, the standard theory fails to predict how the network behaves.

Unlike networks carrying a single family of algorithms, the equilibrium rates now depend on router parameters, such as buffer sizes, and flow arrival patterns. The equilibrium may be nonunique, inefficient, and unfair. The situation is even more complicated when some individual flows respond to multiple congestion signals, such as adjusting AIMD parameters based on estimates of queueing delay. This has motivated recent efforts to construct a more general framework, which includes as a special case the theory for networks using congestion signals from a single family (Tang et al, 2007).

Applications

1) Delay-Tolerant Networks: Sliding window protocols rely on feedback from the receiver to the sender. When communicating with spacecraft, the delay between sending and receiving may be minutes or hours rather than milliseconds, and sliding windows become infeasible. This has led to the development of Delay-Tolerant Networking (DTN), sometimes described as "interplanetary TCP" technology, which also targets more mundane situations in which messages suffer long delays. One example is vehicular networks, in which messages are exchanged over short-range links as vehicles pass one another and are then physically carried by the motion of the vehicles around a city. In such networks, reliability must typically be achieved by combinations of error-correcting codes and multipath delivery (e.g., through flooding).

2) Large Bandwidth-Delay Product Networks: In the late 1990s, it became clear that TCP NewReno had problems in high-speed transcontinental networks, commonly called "large bandwidth-delay product" or "large-BDP" networks. The problem is especially severe when a large-BDP link carries only a few flows, such as those connecting supercomputer clusters. On such networks, an individual flow must maintain a window of many thousands of packets.

Because AIMD increases the window by only a single packet per round trip, the sending rate on a transatlantic link increases by only around 100 kbit/s per round trip. It could thus take almost three hours for a single connection to start using the entire capacity of a 1 Gbit/s link.

Many solutions have been proposed, typically involving increasing the rate at which the window grows or decreasing the amount by which it is reduced. However, both changes make the algorithm more aggressive, which could lead to too much rate being allocated to flows using these solutions and not enough to flows using the existing TCP algorithm. As a result, most solutions try to detect whether the network actually is a large-BDP network and adjust their aggressiveness accordingly. Another possibility is to avoid relying on packet loss at all in large-BDP networks.

Researchers have developed various congestion control algorithms that use congestion signals other than packet loss, e.g., queueing delay. Many proposals also seek to combine timing information with loss detection. This leads to the complications of multiple indicators of congestion described previously.

An often-proposed alternative is for the routers on the congested links to send explicit messages indicating the level of congestion. This was an important part of the available bit rate (ABR) service of Asynchronous Transfer Mode (ATM) networks. It may allow more rapid and precise control of rate allocation, including the elimination of TCP's time-consuming slow start phase. However, it presents significant difficulties for incremental deployment in the current Internet.

3) Wireless networks: Wireless links are less ideal than wired links in many ways. Most importantly, they corrupt packets because of fading and interference, either causing long delays as the lower layers try to recover the packets, or causing the packets to be lost. The first of these results in unnecessary timeouts, forcing TCP to return to slow start, whereas the latter is mistaken for congestion and causes TCP NewReno to reduce its window. Again, many solutions have been proposed: some mask the existence of loss, whereas others attempt to distinguish wireless loss from congestion loss based on estimates of queueing delay or explicit congestion indications.

The fundamental task of resource allocation is also more challenging in wireless networks, partly because resources are scarcer and users may move, but more importantly because of the interaction between nearby wireless links. Because the capacity of a wireless link depends on the strength of its signal and that of interfering links, it is possible to optimize resource allocation over multiple layers of the protocol stack. This cross-layer optimization generalizes utility maximization. It provides challenges as well as opportunities to achieve even greater performance, requiring a careful balance between reducing complexity and seeking optimality.

Research Challenges

1) The impact of network topology: Transport layer congestion control and rate allocation algorithms are often studied in very simple settings. Two common test networks are dumbbell networks, in which many flows share a single congested link, and parking-lot networks, consisting of several congested links in series with one flow traversing all the links and each link also being the sole congested link for another short flow.

Figure 1.8: A Two-link Network Shared by Three Flows

Figure 1.8 shows a two-link parking-lot network. These topologies are used partly because they occur frequently in the Internet (such as when a flow is bottlenecked at the ingress and egress access links), and partly because there are intuitive notions of how algorithms "should" behave in these settings. However, these simple topologies often give a misleading sense of confidence in one's intuition. For example, in parking-lot topologies, algorithms that give a high rate to the single-link flows at the expense of the multilink flow achieve higher total throughput, and thus it is widely believed that there is a universal trade-off between fairness and efficiency. However, networks exist in which increasing the fairness actually increases the efficiency (Tang et al, 2006). This and other interesting, counterintuitive phenomena arise only in a network setting, where sources interact through shared links in intricate and surprising ways.

2) Stochastic Network Dynamics: The number of flows sharing a network is continually changing, as new application sessions start and others finish. Furthermore, packet accumulation at each router is shaped by events at all upstream routers and links, and packet arrivals within each session are shaped by the application layer protocols, including those in emerging multimedia and content distribution protocols. Although it is easy to study the effects of this variation by measuring either real or simulated networks, it is much harder to capture these effects in theoretical models. Although the deterministic models studied to date have been very fruitful in providing a fundamental understanding of issues such as fairness, there is increasing interest in extending the theoretical models to capture the stochastic dynamics of real networks. As an example of such dynamics, consider a simple case of one long flow using the entire capacity of a given link and a short flow that starts up using the same link. If the short flow finishes before the long flow does, then the finish time of the long flow is delayed by the size of the short flow divided by the link capacity, independent of the rate allocated to the short flow, provided that the sum of their rates always equals the link capacity. In this case, it would be optimal to serve the flows in "Shortest Remaining Processing Time first" (SRPT) order, that is, to allocate all the rate to the short flow and temporarily suspend the long flow. However, as the network does not know in advance that the short flow will finish first, it will instead seek to allocate rates fairly between the two flows. This can cause the number of simultaneous flows to be much larger than the minimum possible, resulting in each flow getting a lower average rate than necessary. The fundamental difficulty is that allocating instantaneous rates fairly among the existing flows is no longer the optimal strategy.
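The effect described here can be checked with a small worked example; the flow sizes and the unit link capacity below are assumptions chosen for illustration.

```python
def finish_times(long_size, short_size, capacity=1.0):
    """Finish times (long, short) for two flows starting together on one link."""
    # SRPT: the short flow gets the whole link, then the long flow runs alone.
    srpt_short = short_size / capacity
    srpt_long = (short_size + long_size) / capacity

    # Fair sharing: each flow gets half the link until the short flow ends,
    # after which the long flow uses the whole link.
    fair_short = short_size / (capacity / 2)
    fair_long = fair_short + (long_size - short_size) / capacity

    return (srpt_long, srpt_short), (fair_long, fair_short)

srpt, fair = finish_times(long_size=10.0, short_size=2.0)
print(f"SRPT: long finishes at {srpt[0]}, short at {srpt[1]}")
print(f"Fair: long finishes at {fair[0]}, short at {fair[1]}")
# The long flow finishes at the same time either way; only the short flow suffers.
```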

Problem Specification

The congestion control algorithms of most of the reliable transport protocols of the Internet perform poorly in high-density sensor network applications, particularly in wireless sensor networks (WSNs) and mobile ad hoc networks, since those algorithms were designed mainly for wired networks and not for WSNs.

To ensure that a data packet is delivered to the destination reliably, a transport layer protocol must sit between the application layer and the network layer. High data rate applications involve voluminous data transfer and may result in persistent congestion, leading to inevitable loss of data (Y. G. Iyer et al. 2005). In such high-rate sensor network applications, a fairly reliable solution is mandatory to avoid congestion and to maintain complete and efficient data transfer between many sources and one or more sinks.

Motivation

A typical wireless sensor network is highly unstable, being error-prone for various reasons such as radio signal interference, radio channel contention, and the limited survival rate of nodes (Yao-Nan Lien, 2009). It is obvious that the capabilities of wireless sensor nodes are limited compared with their fixed network counterparts, for various reasons (Jang-Ping Sheu et al. 2009). The comparatively new protocol DCCP has interesting properties that make it possible to use in an error-prone sensor network scenario.

Research Contribution

This research work presents three proposed approaches for handling issues such as congestion control and the management of reliable data transmission between a source and a destination.

The proposed approaches used in this dissertation are:

The Reset Frequency Controlled Parameter Re-estimation for the Improvement of Congestion Control in DCCP_TCPlike

The Reset Frequency Controlled Parameter Re-estimation for the Improvement of Congestion Control in DCCP_TFRC

An Analysis on the improved Reset Frequency Controlled Parameter Re-estimation method in DCCP

ORGANIZATION OF THESIS

Chapter 1 provides an overview of Wireless Sensor Networks (WSNs). The sensor nodes, their applications, the major issues and influencing factors, and their importance in the present scenario are discussed in detail. The chapter also discusses the transport layer and its protocols, and explains the congestion control issues and the protocols used for congestion control.

Chapter 2, "Literature Survey" discusses about the existing transport layer protocols and its performance in congestion control. This chapter analysis and examines the existing protocols. The inference from the existing techniques and its limitations are also discussed in this chapter.

Chapter 3 gives a detailed explanation of the first proposed methodology, namely "Reset Frequency Controlled Parameter Re-estimation for the Improvement of Congestion Control in DCCP_TCPlike", which overcomes the drawbacks of an existing protocol used for congestion control. The formulation and modeling of the proposed algorithm are clearly discussed in this chapter.

Chapter 4 discusses the second proposed methodology, namely "Reset Frequency Controlled Parameter Re-estimation for the Improvement of Congestion Control in DCCP_TFRC". The advantage of this approach is clearly stated, and the algorithm and formulation of the proposed approach are discussed in detail.

Chapter 5 discusses the third proposed methodology, namely "An Analysis on the improved Reset Frequency Controlled Parameter Re-estimation method in DCCP".

Chapter 6 presents the experimental results and compares all three proposed approaches with the existing techniques. The experimental evaluation is based on various experimental parameters.

Chapter 7 concludes the thesis with the findings for the proposed approaches based on certain performance parameters. This chapter also discusses the scope for future improvement.

SUMMARY

This chapter has discussed the fundamentals of sensor networks and the factors influencing the design of such networks. The sensor network communication architecture, with a thorough analysis of the transport layer, has also been covered. The concept of reliable transmission by controlling congestion in sensor networks, along with recent advancements, has also been dealt with. A brief overview of the proposed approaches is also given in this chapter.


