Full Virtualisation Using Binary Translation


02 Nov 2017


University of Greenwich

Introduction to VMware

Years ago, data centres were crammed with physical servers that sat largely idle, typically running at less than 15% of their required capacity. Since businesses are always looking to cut total costs, this was considered a waste of precious resources: money, power, maintenance contracts and even space. Virtualisation became the ideal solution.

For the past few years, the concept of virtualisation has been an ever-growing phenomenon in the IT world. Numerous vendors have come up with their own methodologies, terms and arrays of products related to this technology. It is, however, crucial to realise that there are a number of different types of virtualisation, each with a different arrangement and different objectives.

Literature review

In the early twenty-first century, IT managers began to realise that old-fashioned methods of running desktop and laptop systems were becoming ineffective at dealing with rapidly changing business requirements, end-user demands regarding the implementation of technology, and hacking, once thought to be just for fun, now turning into organised crime. Given these challenges, it is not unexpected that enterprises are increasingly employing new technologies to meet these growing difficulties.

In the late 90's, a company called VMware released VMware Workstation, designed to run many operating systems at the same time on a personal computer. In mid-2001 the company released two server products: VMware GSX Server, which requires a host OS to run, and VMware ESX Server, which has its own VMkernel, referred to as the "hypervisor", and runs directly on the hardware. Since the first release there have been various upgrades, such as ESX Server 2.0 and ESXi 3.5, and the latest version is VMware ESXi Server 4.1.

According to Kishor, Chief Consulting Officer of ETCO INDIA, virtualisation has become the buzzword whenever future IT solutions for businesses are discussed. Many companies have already started implementing virtual servers in their data centres. In fact, in a 2012 Gartner report titled "Key Challenges in Cloud Computing", it was stated that more than 50% of all data workloads are virtualised.

With this technology around, some companies have started to research whether virtualisation is actually profitable to the business, beyond the excitement surrounding it. Since the focus of businesses is on cost reduction as well as improvement in productivity and performance, virtualisation would appear to be a step in the right direction. A Gartner report, however, warned of the negative effects of virtualisation if company strategies, performance goals and information security objectives are not incorporated into the architectural design by the solution providers. In ETCO India's consulting assignments, Kishor found that business stakeholders are very sceptical about accepting virtualisation to host their business-critical solutions, due to the absence of a proven track record and of empirical generalisations in the academic world. This is therefore an enormous area for further academic research.

Methodology

This chapter provides an overview of the research design (i.e. the case study) used for the research on VMware. The research for this report is mainly secondary in nature, conducted by making use of existing data. The major information on the subject has been extracted from literature, primarily articles, journals and textbooks. A qualitative research approach was selected for the study: the research requires subjectivity and a breadth of information that can only be obtained through a qualitative approach, which is subjective in nature when compared with quantitative research and makes use of contrasting methods of data collection. As stated previously, this report uses secondary methods of data collection. The primary criteria for selecting the literature were the relevance of the issue and the year in which the study was published. Private and public libraries, in addition to online sources, were accessed in order to gather the pertinent information. The online databases used were Emerald, EBSCO and Phoenix. Some of the search terms used were "VMware Importance", "Security in VMware" and "How VMware can increase ROI in a company".

Findings

As stated previously, this report used secondary methods of data collection. A review of the existing literature on virtualisation made it evident that virtualisation has both merits and limitations, which are discussed in the next chapter.

Discussion

What is Virtualisation

According to Popek and Goldberg (1974), a virtual machine is "an efficient, isolated duplicate of the real machine" [1]. Virtualisation has been studied since the early days of computing and, as defined in the preceding chapters, it can be summarised as a form of technology that allows multiple operating systems to run simultaneously on a single computer. It emerged as a means to utilise hardware resources more fully and to facilitate time-sharing systems.

Figure: Without virtualisation, a single OS owns all hardware resources; after virtualisation, multiple OSs share the hardware resources.

Because the virtualisation layer sits between the guests and the physical host hardware, it can control the guests' use of CPU, memory and storage, even permitting a guest OS to migrate from one machine to another. Using specially developed software, an IT administrator can divide one physical machine into several virtual machines, and each virtual machine then acts like a distinct physical machine, capable of running its own operating system (OS).

In the following chapters, this document discusses the different types of virtualisation, the key benefits of using this technology and its limitations. The later part of this report presents some of VMware's solutions to the various challenges posed by the management of virtualised data centres.

Types of Virtualisation

Full Virtualisation using Binary Translation

Any x86 operating system can be virtualised using a combination of binary translation and direct execution techniques. This combination provides full virtualisation because the guest OS is completely decoupled from the underlying hardware by the virtualisation layer, as shown in Figure 4 below.


Figure 4 The Binary Translation approach to x86 virtualisation

This type of virtualisation is the only one that requires no operating system or hardware assistance to virtualise sensitive and privileged instructions, because it relies on software known as a hypervisor. The hypervisor translates the guest's privileged operating-system instructions on the fly and caches the results for future use, while user-level instructions run unmodified at native speed. Full virtualisation offers the best isolation and security for virtual machines, and it streamlines migration and manageability, since the same guest OS instance can run either virtualised or on native hardware. VMware's virtualisation products and Microsoft Virtual Server are typical examples of full virtualisation.
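The translate-and-cache behaviour described above can be illustrated with a toy sketch. The instruction names here are hypothetical placeholders: a real binary translator operates on x86 machine code, not strings, but the principle of rewriting only privileged instructions and reusing cached translations is the same.

```python
# Toy illustration of the translate-and-cache idea behind binary translation.
# "Instructions" are just strings: privileged ones are rewritten to calls
# into the virtual machine monitor, user-level ones pass through unmodified.

PRIVILEGED = {"cli", "hlt", "mov_cr3"}  # assumed examples of sensitive instructions

translation_cache = {}

def translate(block):
    """Rewrite a block of guest instructions, caching the result for reuse."""
    key = tuple(block)
    if key in translation_cache:      # reuse a previously translated block
        return translation_cache[key]
    translated = [
        f"vmm_emulate({insn})" if insn in PRIVILEGED else insn
        for insn in block
    ]
    translation_cache[key] = translated
    return translated

block = ["add", "cli", "sub"]
print(translate(block))        # only the privileged 'cli' is rewritten
print(len(translation_cache))  # 1: the translated block is cached for next time
```

Running the same block a second time hits the cache, which is why translation overhead is paid only once per code block.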

Para-Virtualisation

This approach differs somewhat from the full virtualisation procedure described above: it relies on communication between the guest OS and the hypervisor to improve performance and efficiency.


Figure 5 Para-virtualisation

Paravirtualisation, as shown in Figure 5, involves the modification of the OS kernel to replace non-virtualisable commands with hypercalls that communicate directly with the virtualisation layer hypervisor.

In this type of virtualisation, each virtual machine is aware of the others. The hypervisor does not need much processing power to manage the guest operating systems, because each OS already knows the load the other operating systems are placing on the physical machine. The whole system works together as a cohesive unit.

OS-level Virtualisation

Figure: OS-level virtualisation implementation — multiple virtual servers running on the single kernel of the host operating system, on shared hardware.

The OS-level virtualisation approach is based on the chroot concept of Unix-based operating systems. It does not use a hypervisor; instead, the capability is part of the host operating system, which carries out all the functions of a fully virtualised hypervisor. The major drawbacks of this method are that all the guest machines must run the same OS and that a kernel problem can cripple all the virtual machines.

Which method is best?

The choice depends entirely on the network administrator's requirements. If, for example, the administrator's physical servers all run the same operating system, an OS-level approach might be the better choice. On the other hand, if the servers run several different operating systems, para-virtualisation might be better. One potential drawback of para-virtualisation is support: the method is relatively new and only a few firms offer para-virtualisation software. In general, most vendors support full virtualisation, but interest in para-virtualisation is rising and it may eventually replace full virtualisation.

Limitations of Virtualisation

The benefits of server virtualisation are so attractive that it is easy to overlook its limitations. Research was carried out to identify the drawbacks, and this part of the report lists some of the limitations associated with virtualisation.

Virtualisation is not an optimal choice for machines dedicated to applications with high demands on processing power. For such servers, creating too many virtual servers on a single physical machine is unwise, as it will overload the server's CPU.

Access to I/O resources: I/O devices such as printers are shared between all the hosts. Therefore, if one virtual machine is using the device, other virtual machines are held in a queue or may sometimes be denied access.

Migration is another limitation: at present, it is only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processor.

Restricted disk space: an excessive number of virtual servers can impair a server's ability to store data.

Reliability also comes into question: if a company's vital data are stored on virtual servers and the physical server goes down, there will be no access to those data.

Conclusion

Research has shown that the concept of virtual machines is not a new one; it has been around for years, allowing several users to safely share expensive machines. Few people knew about or understood the idea, and as computers became cheap, the motivation behind virtualisation decreased. The founders of VMware later revived the virtual machine concept because of the problems IT managers were facing, such as the rapid increase in server deployments and the need to run multiple applications on different operating systems.

Virtualisation, as claimed, can help IT managers spend less time on repetitive jobs, enabling them to respond more quickly to business needs, and it helps businesses reduce cost and complexity. Nevertheless, virtualisation can be technically challenging and may cause significant operational disruption. Companies considering virtualisation are more likely to succeed with a partner that has extensive experience in virtualisation technologies, in order to address the limitations associated with this technology. There is no doubt that virtualisation can dramatically reduce IT costs while significantly improving efficiency, but there are limitations, and further research is needed to derive a full understanding of this technology within the IT environment.

Bibliography of VMware

Books

Popek, G. J., Goldberg, R. P. 1974. Formal requirements for virtualizable third-generation architectures. Communications of the ACM 17(7): 412-421.

Fred Douglis , Deepti Bhardwaj , Hangwei Qian , Philip Shilane, Content-aware load balancing for distributed backup, Proceedings of the 25th international conference on Large Installation System Administration, p.13-13, December 04-09, 2011, Boston, MA

URLs

VMware Inc, 2006. "Virtualisation Overview"

Orran Krieger , Phil McGachey , Arkady Kanevsky, Enabling a marketplace of clouds: VMware's vCloud director, ACM SIGOPS Operating Systems Review, v.44 n.4, December 2010

VMware Inc. (2006). Virtualisation Overview. California: VMware Inc.

http://www.scribd.com/doc/28332572/Virtualisation-PPT

http://ezinearticles.com/?Academic-Research-on-New-Challenges-in-IT-Systems-and-Networking&id=5109370

http://www.etcoindia.net/modernitsystemstopics.html

http://www.datadisk.co.uk/html_docs/vmware/introduction.htm

http://networksandservers.blogspot.com/2011/11/full-Virtualisation-explained.html

http://www.articlesbase.com/information-technology-articles/recommendations-on-academic-topics-for-dissertations-and-thesis-projects-pertaining-to-modern-challenges-in-it-infrastructure-and-systems-3353121.html

http://www.yoyoclouds.com/2012/05/how-server-Virtualisation-works.html

http://www.scribd.com/doc/37170624/Understanding-Full-Virtualisation-Para-Virtualisation-and-Hardware-Assist

http://pubs.vmware.com/vsphere-50/topic/com.vmware.vsphere.introduction.doc_50/GUID-7EE617A2-4A10-424F-BAE2-56CA6692A93F.html

http://www.anandtech.com/show/2480/8

http://www.howstuffworks.com/server-Virtualisation.htm

http://networksandservers.blogspot.com/2011/11/para-is-english-affix-of-greek-origin.html

http://pubs.vmware.com/vsphere-4-esx-vcenter/topic/com.vmware.vsphere.intro.doc_41/c_vmware_infrastructure_introduction.html

http://www.dc.uba.ar/events/eci/2008/courses/n2/Virtualisation-Introduction.ppt

http://www.edn.com/design/systems-design/4398677/1/Memory-Hierarchy-Design---Part-4--Virtual-memory-and-virtual-machines

http://www.scribd.com/doc/31339214/2/Packet-Filtering-Example

http://www.gartner.com/technology/topics/cloud-computing.jsp

VMware (2007b). Understanding Full Virtualization, Paravirtualization, and Hardware Assist. Retrieved March 01, 2009, from www.vmware.com/files/pdf/VMware_paravirtualization.pdf.

Von Hagen, W. (2008). Professional Xen Virtualization. Indianapolis: Wiley Publishing, Inc.

Introduction to ACLs on Cisco equipment

A firewall is a vital method of increasing network security. Nevertheless, the security level does not rest on the firewall itself but on the rules configured within it. When learning about firewall configuration, it is important to focus on creating an accurate, conflict-free rule set. A simple firewall, such as a Cisco ACL, is a sensible first step before studying more complex firewalls. This part of the report shall therefore deal with Cisco ACLs.

Objectives

The objectives of this part of the report are to:

Define and describe the purpose and operation of ACLs

Describe the process of creating and editing ACLs

Explain the processes involved in testing packets with ACLs

Describe standard and extended ACLs

Approach

A multistep approach was taken in collecting data and compiling this part of the report, including:

Interviews with knowledgeable individuals with relevant experience, to gather data on ACLs.

Sourcing journals, books and articles from online databases (e.g. the Cisco website) to extract relevant information regarding ACLs.

Carrying out simple networking labs using Packet Tracer to understand how ACLs work.

Defining and describing the purpose and operation of ACLs

ACL stands for Access Control List. As the name implies, it is used for access control. An ACL is a router configuration construct that controls whether the router permits or denies packets based on criteria found in the packet header. As each packet arrives at an interface with an associated ACL, the ACL is tested from top to bottom, one line at a time, looking for a pattern matching the incoming packet. An everyday analogy: suppose the president of the USA is having a birthday party at the White House. He does not know everybody coming to the party, so his party organiser (in this case the IT administrator) creates a list of invitees, known here as the ACL. The list contains the names of those allowed and those not allowed; these names can be regarded as IP addresses. Next to the names are further rules, such as with or without a tie, or wearing black shoes; this corresponds to an extended ACL. The list is then given to the security agents at the front and back gates; these agents are the routers. They enforce what the list contains, admitting or turning away guests accordingly.

ACLs are very versatile tools: they control access both to and from network segments and can be used to implement security policies as described above. With the proper combination of access lists, IT managers have the power to implement nearly any access policy they can devise.

Deploying ACL statements on a router does not, however, make the router a full-fledged firewall. A permit or deny rule associated with a pattern determines a packet's chances of getting through. A wildcard mask can be used to define how much of an IP source or destination address is applied to the pattern match. The statement can also include port numbers and protocols such as TCP, UDP, Telnet and FTP.
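As a minimal sketch of how a wildcard mask works (a 0 bit means "must match", a 1 bit means "don't care"), the following example checks an address against a Cisco-style address/wildcard pair; the addresses used are illustrative:

```python
import ipaddress

def matches(addr, base, wildcard):
    """Return True if addr matches base under a Cisco wildcard mask.
    Wildcard bits set to 0 must match; bits set to 1 are ignored."""
    a = int(ipaddress.IPv4Address(addr))
    b = int(ipaddress.IPv4Address(base))
    w = int(ipaddress.IPv4Address(wildcard))
    return (a & ~w) == (b & ~w)  # compare only the "must match" bits

# 0.0.0.255 ignores the last octet, so any 172.16.1.x host matches:
print(matches("172.16.1.99", "172.16.1.0", "0.0.0.255"))   # True
print(matches("172.16.2.99", "172.16.1.0", "0.0.0.255"))   # False
```

Note that the wildcard is the bitwise inverse of a subnet mask: 0.0.0.255 corresponds to the /24 mask 255.255.255.0.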

Once an ACL is created, it must be applied to either incoming or outgoing traffic on an interface for it to take effect. When an ACL is applied to an interface, the router analyses every packet passing through that interface in the specified direction and acts accordingly. There are, however, a few significant rules that govern how a packet is matched against an access list:

A packet is always matched against each line of the access list in sequential order: it starts with the first line, then the second, then the third, and so on.

The packet is matched against lines of the access list only until a match is made. Once the packet matches a line of the access list, action is taken and no further comparisons take place.

There is an implicit "deny" at the end of an access list, which basically means that if a packet doesn’t match up to any lines in the access list, that packet will be essentially dropped.
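The three rules above can be sketched in a few lines of code. The rule format here is a hypothetical simplification (real IOS ACL lines also carry wildcard masks, protocols and ports), but the top-down, first-match-wins behaviour and the implicit deny are exactly as described:

```python
def evaluate(acl, packet_src):
    """Match a packet's source address against ACL lines top to bottom.
    The first matching line wins; an unmatched packet hits the implicit deny."""
    for action, src in acl:            # rule 1: sequential, top to bottom
        if src == packet_src or src == "any":
            return action              # rule 2: no further comparisons
    return "deny"                      # rule 3: implicit deny at the end

acl = [("deny", "172.16.1.2"), ("permit", "any")]
print(evaluate(acl, "172.16.1.2"))   # deny   (matched by the first line)
print(evaluate(acl, "10.0.0.1"))     # permit (falls through to 'permit any')
print(evaluate([], "10.0.0.1"))      # deny   (implicit deny on an empty list)
```

The last call shows why an ACL with no permit statement blocks everything: the implicit deny catches whatever nothing else matched.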

Processes involved in testing packets with ACLs

The order in which ACL statements are placed is essential. When a router determines whether to allow or refuse a packet, the Cisco Internetwork Operating System (IOS) software tests the packet against each condition statement in the order in which the statements were created. After a successful match is found, no further condition statements are tested. Furthermore, if a condition statement permitting all traffic is created, no statement added later will ever be checked. To add statements to a standard or extended ACL, the ACL must be deleted and re-created with the new condition statements.

How ACLs Work

As mentioned above, an ACL is basically a group of statements that helps define how packets:

Enter inbound interfaces

Relay through the router

Exit outbound interfaces of the router

Figure ACL Test Matching Process

The preceding flow chart shows that ACL lines are processed in order from top to bottom. When a packet arrives at an interface, the router checks whether an ACL is applied to that interface in the inbound direction. If so, the packet is tested against each rule from the top down; if a rule matches, the packet is permitted or denied according to that rule and no further testing occurs on that packet. If no ACL line matches, the packet is denied by default.

Creating ACLs

This part of the document shows some configuration commands, both global statements and interface commands. ACL commands are created in global configuration mode. An ACL number from 1 to 99 tells the router to accept standard ACL statements, while numbers 100 to 199 tell the router to accept extended ACL statements (discussed in the next chapters). It is very important to order the Access Control List carefully and logically: permitted IP protocols must be clearly specified and all other protocols should be denied, though there is an implicit deny even when it is not stated.

This list denies traffic from all addresses in the range 172.16.1.0 to 172.16.1.255:

Of901#config t

Of901(config)#access-list 50 deny 172.16.1.0 0.0.0.255

Of901(config)#access-list 50 permit any

The final step is to apply the access list to the correct interface. As the access list being configured is a standard access list, it is best applied as close to the destination as possible:

Of901(config)#interface f0/0

Of901(config-if)#ip access-group 50 out

How ACL can be used to filter traffic and used to protect a network from viruses

ACLs can filter traffic according to the "3 Ps": per protocol, per interface and per direction. Only one ACL per protocol, per interface, per direction can be applied. An ACL can also help protect a network against viruses by identifying packets that match certain criteria. For example, if a virus on a network is sending traffic out over IRC port 194, an extended ACL (e.g. number 103) can be created to identify that traffic.

Order of operations in which an ACL works

Routers process an ACL from top to bottom. When the router assesses traffic against the ACL, it starts at the beginning of the list and works its way down, permitting or denying traffic as it goes; once a match is found, processing stops. This means that whichever rule appears first takes priority: if a rule near the top of the ACL denies some traffic, the router denies that traffic even if a rule lower in the list would allow it.

Other uses of ACLs.

ACLs are not just for filtering traffic; as discussed above, they can be used for numerous purposes: to control debug output; to control route access (e.g. as a routing distribute-list to permit or deny specific routes into or out of a routing protocol); to filter BGP routes with an AS-path list; and to identify traffic for encryption. For example, when encrypting traffic between two routers, the router can be told which traffic to encrypt, which traffic to send unencrypted, and which traffic to drop.

The process of creating and editing ACLs

In the Cisco implementation, each additional criteria statement added to the configuration is appended to the end of the ACL. It is important to note that once a statement has been created and applied to a router, individual statements cannot be removed; to remove one, the whole ACL must be deleted and re-created.

Furthermore, as stated in the previous chapter "Processes involved in testing packets with ACLs", the order of access list statements is important in determining whether to permit or deny a packet, as the IOS software tests the packet against each criteria statement in the exact order in which the statements were created. After a successful match has been established, no more criteria statements are checked. For example, if a statement in the ACL explicitly permits all traffic, no statement added after it will ever be checked.

A text editor such as Notepad can be used to make changes to an access list, which can then be copied to the router via the command-line interface (CLI) once all changes have been made. Up to two ACLs can be applied to each interface of a router: one inbound and one outbound.

Types of ACL

Since the 1990s, network administrators have used two basic types of ACL: standard and extended. As discussed in this document, a standard IP ACL filters only on the source IP address in the IP packet header, while an extended IP ACL filters on the source IP address, the destination IP address and the TCP/IP protocol, such as IP (all TCP/IP protocols), ICMP, OSPF, TCP, UDP and others.

Standard ACLs

A standard ACL only allows a statement to permit or deny traffic from a specific source IP address. The destination of the packet and the ports involved do not affect processing. Standard ACLs can be identified in two ways: by number or by name. To create a numbered standard ACL, the following command is given:

Figure

The example above permits traffic from the class B subnets 172.16.2.0 and 172.16.1.0 and, though not stated, implicitly denies traffic from all other IP addresses.

Standard ACLs must be placed as close to the destination as possible.

Extended ACLs

Extended IP ACLs are much more flexible than standard ACLs, since their conditions can match on many more criteria in a packet header. They can permit or drop traffic from specific source IP addresses or ports, or both, to specific destination IP addresses or ports, or both. An extended ACL can specify several types of traffic, such as ICMP, UDP, SMTP and TCP. To create an extended ACL, the following command is used:

Figure

Extended ACLs must be placed close to the source

Figure Processes involved in testing packets in an extended ACL
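As a minimal sketch of extended ACL matching (hypothetical rule fields; real extended ACLs also support wildcard masks and port operators such as eq and gt), an entry matching on protocol, source, destination and destination port could be modelled as:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str        # "permit" or "deny"
    proto: str         # "tcp", "udp", "icmp", or "ip" for any protocol
    src: str           # source address, or "any"
    dst: str           # destination address, or "any"
    dport: int = None  # destination port; None means any port

def match(rule, proto, src, dst, dport):
    """True if the packet fields satisfy every criterion of the rule."""
    return ((rule.proto in ("ip", proto))
            and rule.src in ("any", src)
            and rule.dst in ("any", dst)
            and rule.dport in (None, dport))

def evaluate(acl, proto, src, dst, dport):
    for rule in acl:                     # top to bottom, first match wins
        if match(rule, proto, src, dst, dport):
            return rule.action
    return "deny"                        # implicit deny

# Hypothetical ACL 103: block IRC (TCP port 194) but permit other IP traffic.
acl_103 = [
    Rule("deny", "tcp", "any", "any", 194),
    Rule("permit", "ip", "any", "any"),
]
print(evaluate(acl_103, "tcp", "10.0.0.5", "192.0.2.1", 194))  # deny
print(evaluate(acl_103, "tcp", "10.0.0.5", "192.0.2.1", 80))   # permit
```

This mirrors the IRC example from the filtering section above: the deny line must come before the catch-all permit, or it would never be reached.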

Configuration


Figure

Verifying ACLs

With the "show ip access-list" command, it is possible to view the contents of configured ACLs. This command does not, however, show which interface the ACL is applied to.

Conclusion

With these considerations in mind, router access lists can be improved. However, there are some significant issues associated with ACLs that are important to consider and act upon. It is therefore common practice to pair filtering routers with other security systems that examine upper-layer network information, in order to increase the security, manageability and visibility of the enterprise security policy.

Bibliography

Popek, G. J., Goldberg, R. P. 1974. Formal requirements for virtualizable third-generation architectures. Communications of the ACM 17(7): 412-421.

VMware Inc, 2006. "Virtualisation Overview"

VMware Inc, 2006. "VMware Infrastructure Architecture Overview"

Orran Krieger , Phil McGachey , Arkady Kanevsky, Enabling a marketplace of clouds: VMware's vCloud director, ACM SIGOPS Operating Systems Review, v.44 n.4, December 2010

Fred Douglis , Deepti Bhardwaj , Hangwei Qian , Philip Shilane, Content-aware load balancing for distributed backup, Proceedings of the 25th international conference on Large Installation System Administration, p.13-13, December 04-09, 2011, Boston, MA

http://engweb.info/courses/lsndi-rmra/acls/acls.html

http://vps.trilog.com/docs/virts.htm

Introduction to EIGRP vs. OSPF

Routing protocols are very important in modern communication networks. A routing protocol determines how routers communicate with each other and how packets are forwarded along the optimum path from a source node to a destination node. Among the widely used routing protocols, the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF) are the most common. EIGRP is a Cisco proprietary distance-vector protocol based on the Diffusing Update Algorithm (DUAL), while OSPF is a link-state interior gateway protocol based on Dijkstra's algorithm (the shortest-path-first algorithm).
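As an illustration of the shortest-path-first idea underlying OSPF, a minimal Dijkstra sketch follows. The router names and link costs are invented for the example; a real OSPF implementation runs this calculation over its link-state database, with costs derived from interface bandwidth:

```python
import heapq

def dijkstra(graph, source):
    """Compute least-cost distances from source over a cost-weighted graph,
    as OSPF's SPF calculation does over the link-state database."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                      # stale queue entry, already improved
        for neigh, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(neigh, float("inf")):
                dist[neigh] = nd
                heapq.heappush(pq, (nd, neigh))
    return dist

# Toy topology: edge weights stand in for OSPF interface costs.
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R4": 1},
    "R3": {"R2": 2, "R4": 20},
    "R4": {},
}
print(dijkstra(topology, "R1"))  # R1 reaches R2 via R3 at cost 3, R4 at cost 4
```

Note how the direct R1-R2 link (cost 10) loses to the two-hop path via R3 (cost 3): shortest-path-first means lowest total cost, not fewest hops.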

Types of Routing Protocols

EIGRP

This is a Cisco routing protocol that combines the advantages of distance-vector protocols, such as IGRP, with those of link-state protocols, such as OSPF. EIGRP uses the Diffusing Update Algorithm (DUAL) to reach convergence quickly. The discussion of EIGRP covers the following topics:

EIGRP Network Topology

EIGRP Addressing

EIGRP Route Summarisation

EIGRP Route Selection

EIGRP Network Scalability

Memory

Bandwidth

EIGRP Security

EIGRP Network Topology

By default, EIGRP uses a non-hierarchical topology and automatically summarises subnet routes of directly connected networks at the network number boundary. This summarisation is sufficient for most Internet Protocol networks.

EIGRP Addressing

The first step in designing an EIGRP network is to decide how to address it. In many cases, an organisation is assigned a single NIC address block to be allocated across a corporate network. Variable-length subnet masks (VLSMs) are used to save address space, and EIGRP supports their use.

EIGRP Route Summarisation

With EIGRP, subnet routes of directly attached networks are automatically summarised at network number boundaries. Furthermore, a network manager can configure route summarisation at any interface with any bit boundary, allowing ranges of networks to be summarised arbitrarily.

EIGRP Route Selection

Routing protocols compare route metrics to select the optimum route from a group of possible routes. When EIGRP summarises a group of routes, it uses the metric of the best route in the summary as the metric for the summary.
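This summary-metric behaviour can be sketched very simply. The route prefixes and metric values below are invented for illustration; real EIGRP composite metrics are computed from bandwidth, delay and other factors:

```python
def summary_metric(component_metrics):
    """An EIGRP summary is advertised with the metric of its best
    (lowest-metric) component route."""
    return min(component_metrics)

# Hypothetical metrics for three /24 components of a 172.16.0.0/16 summary:
routes = {
    "172.16.1.0/24": 28160,
    "172.16.2.0/24": 30720,
    "172.16.3.0/24": 28416,
}
print(summary_metric(routes.values()))  # the summary inherits 28160
```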

EIGRP Network Scalability

Network scalability is limited by two factors: operational and technical issues. Operationally, EIGRP is simple to configure and grow. Technically, as a network grows, EIGRP's resource usage increases at less than a linear rate.

Memory

A router running EIGRP stores all routes advertised by its neighbours in order to adapt quickly to route changes. It is important to note that the more neighbours a router has, the more memory it uses.

Bandwidth

EIGRP uses partial, bounded updates: updates are generated only when a change occurs; during an update, only the changed information is sent; and only the routers affected receive the change. Because of this, EIGRP is very efficient in its use of bandwidth.

EIGRP Security

EIGRP is a Cisco protocol available only on Cisco routers, which prevents accidental routing interference caused by other hosts in a network. In addition, route filters can be set up on any interface of a router to prevent routing information from being propagated incorrectly.

OSPF

OSPF is an Interior Gateway Protocol developed for use in Internet Protocol networks. OSPF distributes routing information among routers belonging to a single autonomous system (AS). An AS is a collection of routers that exchange routing information through a shared routing protocol. OSPF is based exclusively on shortest-path-first technology, also known as link-state technology.

Two design activities are critical to a successful OSPF implementation: address assignment and definition of area boundaries. Ensuring that these activities are properly planned and executed makes all the difference in an OSPF implementation. Each is addressed in more detail below:

OSPF Network Topology

OSPF Addressing and Route Summarisation

OSPF Route Selection

OSPF Convergence

OSPF Network Scalability

OSPF Security

OSPF Network Topology

OSPF works best in a hierarchical, ordered routing environment. The first and most significant decision when planning an OSPF network is which routers and links to incorporate into the backbone and which to contain within each area.

OSPF Addressing and Route Summarisation

Address assignment and route summarisation are inextricably linked when designing an OSPF network. To create a scalable OSPF network, route summarisation should be implemented, and to build an environment capable of supporting it, an effective hierarchical addressing scheme is required. The addressing structure that a network administrator implements can have a profound impact on the performance and scalability of the OSPF network.

OSPF Route Summarisation

Route summarisation is essential to a reliable and scalable OSPF network. The effectiveness of route summarisation, and of an OSPF implementation in general, hinges on the addressing scheme that has been adopted. In OSPF, summarisation must be configured manually, and it takes place between each area and the backbone area.

Separate Address Structures for Each Area

One of the simplest ways to allocate IP addresses in OSPF is to assign a separate network number to each area. A network administrator can then create a backbone and several areas, and assign a different IP network number to each.

Bit-Wise Subnetting and VLSM

As previously discussed, bit-wise subnetting and VLSM can be used together to conserve address space, for example in a design where a Class C network address is subdivided with an area mask and distributed among eight areas.
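For example (the 192.168.1.0/24 address is hypothetical), subdividing a Class C network with three extra mask bits yields eight equal sub-networks, one per area, which can be sketched with Python's `ipaddress` module:

```python
import ipaddress

class_c = ipaddress.ip_network("192.168.1.0/24")

# Three extra mask bits (/24 -> /27) give 2**3 = 8 equal sub-networks,
# one per OSPF area; each /27 holds 32 addresses (30 usable hosts)
areas = list(class_c.subnets(prefixlen_diff=3))
# areas[0] = 192.168.1.0/27, areas[1] = 192.168.1.32/27, ...
```

Each area then advertises only its own /27 into the backbone, while the backbone can summarise the whole range back to the original /24.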

Route Summarisation Techniques

Route summarisation is vital in an OSPF environment because it increases the stability of the network. When route summarisation is used, route changes within an area do not need to be propagated into the backbone or other areas.

OSPF Route Selection

When designing an OSPF network for efficient route selection, the following consideration should be taken into account:

Load Balancing in OSPF Network

Internetwork topologies are typically designed with redundant routes to avoid a partitioned network. Redundancy is also useful for providing extra bandwidth in high-traffic areas. If equal-cost routes between nodes exist, Cisco routers automatically load balance across them.
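A minimal sketch of per-destination load sharing over equal-cost routes (the addresses are hypothetical, and a real router does this in the forwarding path rather than in software like this):

```python
import zlib

def pick_next_hop(dest_ip: str, next_hops: list) -> str:
    """Per-destination load sharing: hash the destination address so that
    traffic to a given destination consistently uses one of the equal-cost
    next hops, while different destinations spread across all of them."""
    index = zlib.crc32(dest_ip.encode()) % len(next_hops)
    return next_hops[index]

# Two equal-cost paths discovered by the routing protocol (illustrative)
equal_cost_next_hops = ["10.1.1.1", "10.2.2.2"]
```

Hashing per destination (rather than per packet) keeps each flow on one path, which avoids packet reordering while still spreading the aggregate load.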

OSPF Convergence

One of the strengths of OSPF is its ability to adapt rapidly to topology changes. Two mechanisms drive routing convergence:

Detection of topology changes: OSPF uses two techniques to detect topology changes. The first is an interface status change (for example, carrier failure on a serial link). The second is the "dead timer": if OSPF fails to receive a hello packet from a neighbor within the dead interval, the router declares that neighbor down once the timer expires.

Recalculation of routes: Once a failure is detected, the router that detected it transmits a link-state packet carrying the changed information to all routers in the area. All routers then recalculate their routes using the Dijkstra (shortest-path-first) algorithm, named after the Dutch computer scientist Edsger Dijkstra, who published it in 1959. The time required to run the algorithm depends on a combination of the size of the area and the number of routes in the database.
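The shortest-path-first computation itself can be sketched in Python with a standard priority-queue implementation of Dijkstra's algorithm (the four-router topology and link costs below are made up purely for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path-first: graph maps router -> {neighbour: link cost}.
    Returns the lowest total cost from source to every reachable router."""
    dist = {source: 0}
    pq = [(0, source)]                      # (cost so far, router)
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue                        # stale queue entry, skip it
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd              # found a cheaper path
                heapq.heappush(pq, (nd, nbr))
    return dist

# Hypothetical area: A-B costs 1, A-C costs 4, B-C costs 2, etc.
topology = {"A": {"B": 1, "C": 4},
            "B": {"C": 2, "D": 5},
            "C": {"D": 1},
            "D": {}}
shortest = dijkstra(topology, "A")   # e.g. A reaches D via B and C at cost 4
```

Every router in the area runs this same computation over an identical link-state database, which is why they all converge on consistent routes.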

OSPF Network Scalability

Network scalability is affected by both operational and technical issues. Operationally, OSPF networks should be designed so that areas do not need to be split to accommodate growth, and address space should be reserved to allow the addition of new areas. Technically, scaling is determined by the utilisation of three resources: memory, CPU, and bandwidth. The ability to scale an OSPF network also rests on the overall network structure and addressing scheme. As the preceding discussions of network topology and route summarisation pointed out, a hierarchical addressing environment and a well-structured address assignment are the most important factors in determining the scalability of a network.

OSPF Security

OSPF includes an optional authentication field in its packet header. All routers within an area must agree on the value of the authentication field. Because OSPF is a standard protocol available on many platforms, using the authentication field prevents the inadvertent startup of OSPF on an uncontrolled platform in the network and thereby reduces the potential for instability. In addition, all routers within an OSPF area must hold the same link-state data; for this reason, route filters cannot be used to provide security in an OSPF network.

Analysis

Now that the technical merits and limitations of these routing protocols have been discussed in the preceding chapters, an analysis of this information is required. OSPF is an "open standard" protocol, which means it can be implemented on any platform. This is an advantage over EIGRP, which is a Cisco proprietary standard. Nevertheless, this is the only notable advantage of OSPF over EIGRP identified here.

As pointed out in the preceding chapters, OSPF is designed mainly for hierarchical networks with a well-defined backbone area. In addition, compared with EIGRP, OSPF tends to use more bandwidth to propagate its topology and demands more router CPU time and memory. In the labs carried out so far, OSPF has also proved more difficult to implement than EIGRP.

On the other hand, EIGRP is a Cisco proprietary routing protocol used solely in Cisco routing products. As stated in the preceding chapters, it has several advantages over OSPF. These include:

EIGRP does not require a hierarchical network design to operate efficiently. It automatically summarises its routes where applicable.

EIGRP can be configured to use bandwidth, delay, reliability, and load when calculating the best routes, unlike OSPF, which considers only bandwidth when calculating the cost of a route.

EIGRP gives better control over timing parameters than OSPF, such as hold times and hello intervals. This allows greater flexibility when dealing with wireless connections, where these intervals need to be fine-tuned to a particular device.

EIGRP is simpler and less time-consuming to configure.
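The composite-metric point above can be illustrated with the classic EIGRP metric formula. The function below is a simplified sketch of the pre-"wide metric" calculation; with the default K values (K1=1, K3=1, the rest zero), only bandwidth and delay contribute:

```python
def eigrp_metric(min_bandwidth_kbps, total_delay_usec,
                 k1=1, k2=0, k3=1, k4=0, k5=0, load=1, reliability=255):
    """Classic EIGRP composite metric (simplified sketch).

    bw    = 10^7 / lowest bandwidth along the path, in kbps
    delay = cumulative delay along the path, in tens of microseconds
    """
    bw = 10**7 // min_bandwidth_kbps
    delay = total_delay_usec // 10
    metric = k1 * bw + (k2 * bw) // (256 - load) + k3 * delay
    if k5 != 0:                              # reliability/load term only
        metric = metric * k5 // (reliability + k4)   # applies when K5 != 0
    return metric * 256

# With default K values, a T1 link (1544 kbps, 20 000 us total delay)
# yields the familiar metric 2 169 856
t1_metric = eigrp_metric(1544, 20000)
```

By contrast, OSPF's cost is derived from bandwidth alone, so two links with equal bandwidth but very different delay look identical to OSPF.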

Conclusion

EIGRP is a very efficient protocol that greatly enhances network administrators' visibility and their preventative and forensic analysis capabilities, and it is easy to configure on most networks. It saves a large amount of CPU cycles and bandwidth by relying on its neighbors to update routing tables. Since any router within a common network that has seen a router's hello packet is considered its neighbor, a router using EIGRP has quick access to the topology information of that network. By using this protocol, you tax your routing hardware less, making your network run more efficiently and smoothly.

Using Route Explorer tools in combination with other common administrative tools, network administrators and engineers can prevent many stuck-in-active (SIA) errors and more rapidly detect and diagnose the cause of many more SIA events when they do occur. The result is a reduction in costly network downtime and the freeing of resources to focus on proactive service-availability improvements.


