28 Feb 2018
A computer network is a connection of two or more computers for the purpose of sharing resources and data. These shared resources can include devices such as printers, as well as services such as electronic mail, Internet access, and file sharing. A computer network can also be seen as a collection of personal computers and other related devices which are connected together, either with cables or wirelessly, so that they can share information and communicate with one another. Computer networks vary in size: some serve a single office, while others are vast or even span the globe.
Network management has grown into a career that requires specialized training and carries important responsibilities, creating future opportunities for employment. The resulting expected increase in opportunities should be a persuasive factor for graduates considering a career in network management.
Computer networking is a discipline of engineering that involves communication between various computer devices and systems. In computer networking, protocols, routers, routing, and networking across the public internet have specifications that are defined in RFC documents. Computer networking can be seen as a sub-category of computer science, telecommunications, IT and/or computer engineering. Computer networks also depend largely upon the practical and theoretical applications of these engineering and scientific disciplines.
In the vastly technological environment of today, most organisations have some kind of network that is used every day. It is essential that the day-to-day operations in such a company or organisation are carried out on a network that runs smoothly. Most companies employ a network administrator or manager to oversee this very important aspect of the company’s business. This is a significant position, as it comes with great responsibilities because an organisation will experience significant operational losses if problems arise within its network.
Computer networking also involves setting up any collection of computers or computer devices and enabling them to exchange information and data.
Networks interconnect devices to allow communication over a variety of media, including twisted-pair copper wire cable, coaxial cable, optical fiber, and various wireless technologies. The devices can be separated by a few meters (e.g. via Bluetooth) or nearly unlimited distances (e.g. via the interconnections of the Internet). (http://en.wikipedia.org/wiki/Computer_networking)
Every application, whether small or large, should perform adaptive congestion control, because applications that perform congestion control use the network more efficiently and generally perform better.
Congestion control algorithms prevent the network from entering congestive collapse, a situation in which the network links are heavily utilized but very little useful work is being done. The network will soon begin to require applications to perform congestion control, and applications that do not will be harshly penalized by the network, probably in the form of having their packets preferentially dropped during times of congestion. (http://www.psc.edu/networking/projects/tcpfriendly/)
Informally, congestion means that too many sources are sending too much data, too fast for the network to handle. TCP congestion control is not the same as flow control; there are several differences between the two. Other aspects of congestion control include the distinction between global and point-to-point congestion, along with related but orthogonal issues.
Congestion manifests itself as lost packets (buffer overflow at routers) and long delays (queuing in router buffers). In end-to-end congestion control there is no explicit feedback from network routers, and congestion is inferred from loss observed by the end systems. In network-assisted congestion control, routers provide feedback to end systems, for example by indicating an explicit rate at which the sender may send, or by sending a choke packet. Below are some other characteristics and principles of congestion control:
It is necessary for the TCP sender to use congestion avoidance and slow start algorithms in controlling the amount of outstanding data that is injected into a network.
In order to implement these algorithms, two variables are added to the TCP per-connection state. The congestion window (cwnd) is a sender-side limit on the amount of data the sender can transmit into the network before receiving an acknowledgment (ACK), while the receiver's advertised window (rwnd) is a receiver-side limit on the amount of outstanding data. The minimum of cwnd and rwnd governs data transmission. (Stevens, W. and Allman, M. 1998)
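To make this rule concrete, the constraint that the minimum of cwnd and rwnd governs transmission can be sketched in a few lines of Python (the function name and byte units are illustrative, not part of any particular TCP implementation):

```python
def usable_window(cwnd, rwnd, bytes_in_flight):
    """Bytes a TCP sender may still inject: the effective send window
    is min(cwnd, rwnd), reduced by data sent but not yet ACKed."""
    return max(0, min(cwnd, rwnd) - bytes_in_flight)

# Example: cwnd = 8000, rwnd = 16000, 3000 bytes outstanding
# -> the sender may transmit up to 5000 more bytes.
```

Whichever window is smaller acts as the binding constraint: the receiver's advertised window protects the receiver's buffer, while the congestion window protects the network.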
In TCP flow control, the receiving side of the TCP connection possesses a receive buffer and a speed-matching service that matches the send rate to the receiving application's drain rate. The receiver advertises its spare buffer space by including the value of RcvWindow in the segments it sends, and the sender limits unACKed data to RcvWindow. TCP flow control thereby ensures that the receive buffer does not overflow.
The TCP timeout interval should be longer than the round-trip time (RTT), but the RTT varies: too short a timeout causes premature retransmissions, while too long a timeout causes a slow reaction to segment loss. SampleRTT is the time measured from segment transmission until ACK receipt, ignoring retransmissions. Because SampleRTT varies, a "smoother" EstimatedRTT is computed from several recent samples.
Round-trip time samples arrive with new ACKs. The RTT sample is computed as the difference between the current time and a "time echo" field in the ACK packet. When the first sample is taken, its value is used as the initial value for srtt. Half the first sample is used as the initial value for rttvar. (Round-Trip Time Estimation and RTO Timeout Selection)
Timeouts often cause problems, including restricting the sender, which is compelled to wait until the timeout expires and can do nothing during this period. Also, when the first segment in the sliding window is not ACKed, retransmission becomes necessary, and the sender again waits one RTT before the segment flow continues. It should be noted that on receiving the later segments, the receiver sends back ACKs.
EstimatedRTT = 0.875 * EstimatedRTT + 0.125 * SampleRTT
DevRTT = (1 - 0.25) * DevRTT + 0.25 * | SampleRTT - EstimatedRTT |
TimeoutInterval = EstimatedRTT + 4 * DevRTT
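Assuming the gains from the formulas above (0.125 for the mean, 0.25 for the deviation) and the seeding rule described earlier, the estimator can be sketched in Python (the class and attribute names are illustrative):

```python
ALPHA = 0.125  # gain for the smoothed mean estimate (1/8)
BETA = 0.25    # gain for the deviation estimate (1/4)

class RttEstimator:
    """EWMA smoothing of RTT samples, following the formulas above."""
    def __init__(self, first_sample):
        # As described earlier: the first sample seeds EstimatedRTT,
        # and half the first sample seeds DevRTT.
        self.estimated_rtt = first_sample
        self.dev_rtt = first_sample / 2

    def update(self, sample_rtt):
        # DevRTT is updated first, using the previous EstimatedRTT.
        self.dev_rtt = (1 - BETA) * self.dev_rtt + BETA * abs(sample_rtt - self.estimated_rtt)
        self.estimated_rtt = (1 - ALPHA) * self.estimated_rtt + ALPHA * sample_rtt

    def timeout_interval(self):
        # Timeout = mean plus a four-deviation safety margin.
        return self.estimated_rtt + 4 * self.dev_rtt
```

The 4 * DevRTT term provides the safety margin: when samples fluctuate widely, the deviation grows and the timeout backs off accordingly.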
The Integrated Services (IntServ) and Differentiated Services (DiffServ) architectures are two architectures that have been proposed for providing and guaranteeing quality of service (QoS) over the Internet. Whereas the IntServ framework was developed within the IETF to provide individualized QoS guarantees to individual application sessions, DiffServ is geared towards handling different classes of traffic in different ways on the Internet. These two architectures represent the IETF's current standards for the provision of QoS guarantees, although neither IntServ nor DiffServ has found widespread acceptance on the Internet.
(a) Integrated Service Architecture
In computer networking, the integrated services (IntServ) architecture is an architecture that specifies the elements for the guaranteeing of quality of service (QoS) on the network. For instance, IntServ can be used to allow sound and video to be sent over a network to the receiver without getting interrupted. IntServ specifies a fine-grained Quality of service system, in contrast to DiffServ's coarse-grained system of control.
In the IntServ architecture, the idea is that every router along the path implements IntServ, and applications that require particular guarantees must make individual reservations. "Flow specs" describe the purpose of the reservation, and the underlying signalling mechanism that carries it across the network is called RSVP.
TSPECs include token bucket algorithm parameters. The idea is that there is a token bucket which slowly fills with tokens arriving at a constant rate. Every packet that is sent requires a token, and if no tokens are available, the packet cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how large a burst of traffic is allowed to be. TSPECs typically just specify the token rate and the bucket depth.
For example, a video with a refresh rate of 75 frames per second, with each frame taking 10 packets, might specify a token rate of 750Hz, and a bucket depth of only 10. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth.
This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs to be increased to compensate for the traffic arriving in larger bursts. (http://en.wikipedia.org/wiki/Integrated_services)
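The token bucket mechanism described above can be sketched in Python (the class is an illustrative model, not any router's implementation; the caller supplies timestamps, e.g. from time.monotonic()):

```python
class TokenBucket:
    """Token bucket with `rate` tokens per second and capacity `depth`.
    Each packet consumes one token; with no token available, the packet
    must wait or be dropped, which bounds both the average rate and the
    maximum burst size."""
    def __init__(self, rate, depth, now=0.0):
        self.rate = rate
        self.depth = depth
        self.tokens = depth   # start with a full bucket
        self.last = now

    def allow(self, now):
        # Refill at the constant rate, never beyond the bucket depth.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# The video example above: 750 tokens/s with depth 10 permits one
# 10-packet frame to be sent as a burst, then refills between frames.
bucket = TokenBucket(rate=750, depth=10)
```

With these parameters, a full 10-packet frame drains the bucket at once, and the constant 750 Hz refill restores it just in time for the next frame.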
(b) Differentiated Service Architecture
RFC 2475 (An Architecture for Differentiated Services) was published by the IETF in 1998. DiffServ has since widely replaced other Layer 3 quality-of-service mechanisms (such as IntServ) as the basic protocol that routers use to provide different service levels.
DiffServ (Differentiated Services) is a computer networking architecture which specifies a scalable, less complex, coarse-grained mechanism for classifying and managing network traffic and for providing QoS (Quality of Service) guarantees on modern IP networks. For instance, DiffServ can be used to provide low-latency, guaranteed service (GS) to video, voice, or other critical network traffic, while providing simple best-effort guarantees to non-critical services like file transfers and web traffic.
Modern data networks carry various kinds of services, such as streaming music, video, voice, email, and web pages, and most of the quality-of-service mechanisms proposed to allow these services to co-exist were complicated and did not adequately meet the demands of Internet users.
It would probably be difficult to implement IntServ in the core of the Internet because most of the communication between computers connected to the Internet is based on a client/server architecture. The client/server model describes a structure in which one computer connects to another to give it work instructions or ask it questions. In such an arrangement, the computer that asks the questions and gives out instructions is the client, while the computer that answers the questions and responds to the work instructions is the server.
The same terms are used to describe the software programs that facilitate the asking and answering. A client application, for instance, presents an on-screen interface for the user to work with at the client computer; the server application "welcomes" the client and knows how to respond correctly to the client's commands. Any file server or PC can be adapted for use as an Internet server, although a dedicated computer should be chosen.
Anyone with a computer and modem can join this network over a standard phone line. Dedicating the server (that is, using the computer as a server only) helps avoid security and other basic problems that result from sharing the server's functions. To gain access to the Internet, an engineer is needed to install the broadband modem; the server can then be used to share Internet access with all machines on the network. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)
These days, computers are used for everything from shopping and communication to banking and investment. Intruders into a network system (or hackers) do not care about the privacy or identity of network users. Their aim is to gain control of computers on the network so that they can use these systems to launch attacks on other computer systems.
Therefore, people who use the network for these purposes must be protected from strangers who try to read their sensitive documents, use their computers to attack other systems, send forged email, or access their personal information (such as bank or other financial statements).
The International Organization for Standardization's (ISO's) 17799:2005 Standard is a code of practice for information security management which provides a broad, non-technical framework for establishing efficient IT controls. The ISO 17799 Standard consists of 11 clauses that are divided into one or more security categories, for a total of 39 security categories.
Here is a brief description of the security clauses of the ISO 17799:2005 standard, the code of practice for information security management:
Security Policy: Security policies are the foundation of the security framework and provide direction and information on the company's security posture. This clause states that support for information security should be done in accordance with the company's security policy.
Organizing Information Security: This clause addresses the establishment and organizational structure of the security program, including the appropriate management framework for security policy, how information assets should be secured from third parties, and how information security is maintained when processing is outsourced.
Asset Management: This clause describes best practices for classifying and protecting assets, including data, software, hardware, and utilities. The clause also provides information on how to classify data, how data should be handled, and how to protect data assets adequately.
Human Resources Security: This clause describes best practices for personnel management, including hiring practices, termination procedures, employee training on security controls, dissemination of security policies, and use of incident response procedures.
Physical and Environmental Security: As the name implies, this clause addresses the different physical and environmental aspects of security, including best practices organizations can use to mitigate service interruptions, prevent unauthorized physical access, or minimize theft of corporate resources.
Communications and Operations: This clause discusses the requirements pertaining to the management and operation of systems and electronic information. Examples of controls to audit in this area include system planning, network management, and e-mail and e-commerce security.
Access Control: This security clause describes how access to corporate assets should be managed, including access to digital and nondigital information, as well as network resources.
Information Systems Acquisitions, Development, and Maintenance: This section discusses the development of IT systems, including applications created by third parties, and how security should be incorporated during the development phase.
Information Security Incident Management: This clause identifies best practices for communicating information security issues and weaknesses, such as reporting and escalation procedures. Once established, auditors can review existing controls to determine if the company has adequate procedures in place to handle security incidents.
Business Continuity Management: The 10th security clause provides information on disaster recovery and business continuity planning. Actions auditors should review include how plans are developed, maintained, tested, and validated, and whether or not the plans address critical business operation components.
Compliance: The final clause provides valuable information auditors can use when identifying the compliance level of systems and controls with internal security policies, industry-specific regulations, and government legislation.
(Edmead, M. T. 2006 retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209&iid=467)
The standard, which was updated in June 2005 to reflect changes in the field of information security, provides a high-level view of information security from different angles and a comprehensive set of information security best practices. More specifically, ISO 17799 is designed for companies that wish to develop effective information security management practices and enhance their IT security efforts.
The ISO 17799 Standard contains 11 clauses which are split into security categories, with each category having a clear control objective. There are a total of 39 security categories in the standard. The control objectives in the clauses are designed to meet the risk assessment requirements and they can serve as a practical guideline or common basis for development of effective security management practices and organisational security standards.
Therefore, if a company is compliant with the ISO/IEC 17799 Standard, it will most likely meet IT management requirements found in other laws and regulations. However, because different standards strive for different overall objectives, auditors should point out that compliance with 17799 alone will not meet all of the requirements needed for compliance with other laws and regulations. Establishing an ISO/IEC 17799 compliance program could enhance a company's information security controls and IT environment greatly.
Conducting an audit evaluation of the standard provides organizations with a quick snapshot of the security infrastructure. Based on this snapshot, senior managers can obtain a high-level view of how well information security is being implemented across the IT environment. In fact, the evaluation can highlight gaps present in security controls and identify areas for improvement.
In addition, organizations looking to enhance their IT and security controls could keep in mind other ISO standards, especially current and future standards from the 27000 series, which the ISO has set aside for guidance on security best practices. (Edmead, M. T. 2006 retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209&iid=467)
Tree topologies bind multiple star topologies together onto a bus. In its most simple form, only hub devices are directly connected to the tree bus and the hubs function as the root of the device tree.
This bus/star hybrid approach supports future expandability of the network much better than a bus (limited in the number of devices due to the broadcast traffic it generates) or a star (limited by the number of hub ports) alone. Topologies remain an important part of network design theory. It is very simple to build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind these gives you a deeper understanding of important elements like hubs, broadcasts, ports, and routes. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)
The ring topology should be considered for medium-sized companies; it would also be a good topology for small companies because it ensures ease of data transfer.
In a ring network, there are two neighbors for each device, so as to enable communication. Messages are passed in the same direction, through a ring which is effectively either "counterclockwise" or "clockwise". If any cable or device fails, this will break the loop and could disable the entire network.
Bus networks utilize a common backbone to connect various devices. This backbone, which is a single cable, functions as a shared medium of communication which the devices tap into or attach to, with an interface connector.
A device wanting to communicate with another device on the network sends a broadcast message onto the wire that all other devices see, but only the intended recipient actually accepts and processes the message. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)
The star topology is used in many home networks. A star network has a central connection point, or "hub", which can be an actual hub or a switch. Devices usually connect to the switch or hub with Unshielded Twisted Pair (UTP) Ethernet cable.
Compared to the bus topology, a star network generally requires more cable, but a failure in any star network cable will only take down one computer's network access and not the entire LAN. If the hub fails, however, the entire network also fails. (www.redbooks.ibm.com/redbooks/pdfs/sg246380.pdf)
In an organisation like the Nurht's Institute of Information Technology (NIIT), the above-mentioned security clauses and control objectives provide a high-level view of information security from different angles and a comprehensive set of information security best practices. Also, ISO 17799 is designed for companies like NIIT which aim to enhance their IT security and develop effective information security management practices.
At NIIT, the local network relies to a considerable degree on the correct implementation of these security practices and other algorithms in order to avoid congestion collapse and preserve network stability. An attacker on the network can cause TCP endpoints to react more aggressively in the face of congestion by forging excessive data acknowledgments or excess duplicate acknowledgments. Such an attack could cause a portion of the network to go into congestion collapse.
The Security Policy clause states that “support for information security should be done in accordance with the company's security policy.” (Edmead, M. T. 2006). This provides a foundation of the security framework at NIIT, and also provides information and direction on the organisation’s security posture. For instance, this clause helps the company auditors to determine whether the security policy of the company is properly maintained, and also if indeed it is to be disseminated to every employee.
The Organizing Information Security clause stipulates that there should be appropriate management framework for the organisation’s security policy. This takes care of the organizational structure of NIIT’s security program, including the right security policy management framework, the securing of information assets from third parties, and the maintenance of information security during outsourced processing.
At NIIT, the Security clauses and control objectives define the company’s stand on security and also help to identify the vital areas considered when implementing IT controls. The ISO/IEC 17799's 11 security clauses enable NIIT to accomplish its security objectives by providing a comprehensive set of information security best practices for the company to utilize for enhancement of its IT infrastructure.
Different businesses require different computer networks, because the type of network used must suit the organisation. Smaller businesses are advised to use a LAN because it is more reliable. A WAN or MAN would be ideal for larger companies, and if an organisation decides to expand, it can then change the type of network in use. If an organisation decides to go international, a wide area network can be very useful.
Also, small companies should endeavor to set up their networks using a client/server approach. This would help the company to be more secure and make it easier to keep track of what others on the network are doing. A client/server network would also be more cost-effective than a peer-to-peer network.
On average, most organisations have to spend a good amount of money and resources to procure and maintain a reliable network that will be easy to maintain in the long run.
In TCP congestion control, when CongWin is below Threshold, the sender is in the slow-start phase and the window grows exponentially. When CongWin is above Threshold, the sender is in the congestion-avoidance phase and the window grows linearly. When a triple duplicate ACK occurs, Threshold is set to CongWin/2 and CongWin is set to the new Threshold; when a timeout occurs, Threshold is set to CongWin/2 and CongWin is reset to 1 MSS.
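These transitions can be summarized in a short Python sketch of the Reno-style logic described above (the event names and the use of MSS units are illustrative assumptions, not taken from any specific TCP stack):

```python
def react_to_event(cong_win, threshold, event, mss=1):
    """One step of the congestion-control reaction described above,
    with CongWin and Threshold measured in units of MSS."""
    if event == "new_ack":
        if cong_win < threshold:
            cong_win += mss                    # slow start: exponential growth per RTT
        else:
            cong_win += mss * mss / cong_win   # congestion avoidance: ~1 MSS per RTT
    elif event == "triple_dup_ack":
        threshold = cong_win / 2               # halve the threshold...
        cong_win = threshold                   # ...and resume sending from there
    elif event == "timeout":
        threshold = cong_win / 2
        cong_win = mss                         # back to 1 MSS and slow start
    return cong_win, threshold
```

The asymmetry is deliberate: a triple duplicate ACK indicates that some data is still getting through, so the window only halves, while a timeout signals more severe congestion and resets the window to a single MSS.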
For a Small Office/Home Office (SOHO), a wireless network is very suitable. In such a network, there is no need to run wires through walls and under carpets for connectivity.
The SOHO user need not plug a laptop into a docking station every time they come into the office or fumble with clumsy and unattractive network cabling. Wireless networking provides connectivity without the hassle and cost of wiring and expensive docking stations. Also, as the business or home office grows or shrinks, there is no need to wire new computers to the network, and if the business moves, the network is ready for use as soon as the computers are moved. For networks where wiring is impossible, such as those found in warehouses, wireless will always be the only attractive alternative. As wireless speeds increase, these users have only brighter days ahead. (http://www.nextstep.ir/network.shtml)
It is essential to note that the computer network installed in an organisation represents more than just a simple change in the method by which employees communicate. The impact of a particular computer network may dramatically affect the way employees in an organisation work and also affect the way they think.
Business Editors & High-Tech Writers. (2003, July 22). International VoIP Council
Launches Fax-Over-IP Working Group. Business Wire. Retrieved July 28,
2003 from ProQuest database.
Career Directions (2001, October). Tech Directions, 61(3), 28. Retrieved July 21, 2003
from EBSCOhost database.
Edmead, M. T. (2006) Are You Familiar with the Most Recent ISO/IEC 17799 Changes?
(Retrieved from http://www.theiia.org/ITAuditArchive/?aid=2209&iid=467)
FitzGerald, J. (1999), Business Data Communications And Networking Pub: John Wiley
Forouzan, B. (1998), Introduction to Data Communications and Networking. Pub: Mc-
ISO/IEC 17799:2000 – Code of practice for information security management -
Published by ISO and the British Standards Institute [http://www.iso.org/]
ISO/IEC 17799:2005, Information technology – Security techniques – Code of
practice for information security management. Published by ISO
Kurose, J. F. & Ross, K. W. 2002. Computer Networking - A Top-Down Approach
Featuring the Internet, 2nd Edition, ISBN: 0-321-17644-8 (the international
edition), ISBN: 0-201-97699-4, published by Addison-Wesley, 2002
Ming, D. & Sudama, R. (1992) Network Monitoring Explained: Design and Application.
Pub: Ellis Horwood
Rigney, S. (1995) Network Planning and Management Your Personal
Round-Trip Time Estimation and RTO Timeout Selection (retrieved from
Shafer, M. (2001, June 11). Careers not so secure? Network Computing, 12(12), 130-
Stevens, W. and Allman, M. (1998) TCP Implementation Working Group (retrieved from
Watson, S (2002). The Network Troubleshooters. Computerworld 36(38), 54. (Retrieved July 21, 2003 from EBSCOhost database)
Wesley, A. (2000), Internet Users Guide to Network Resource Tools 1st Ed, Pub: