What Is The Network Architecture


02 Nov 2017

Disclaimer:
This essay has been written and submitted by students and is not an example of our work. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of EssayCompany.

Traditional networks have lately struggled to meet the requirements of today's enterprise. Users need faster and more reliable connections, and carriers likewise want to deliver the best service and profit from it. Today's network is not what it was when first established in the 1990s: there are now five or six times as many users, and technology and research have advanced greatly since then. We need to apply these new technologies to improve the network architecture and provide speeds that keep pace with how users' lifestyles have evolved. The Open Networking Foundation (ONF) started the idea of transforming the network architecture through Software-Defined Networking. In this architecture the data plane and the control plane are decoupled, the network becomes smart and logically centralized, and applications run on top of the underlying network infrastructure. These steps provide stability, unprecedented programmability, automation, and network control, and the result is highly flexible, covering the business needs of both users and carriers.

Open Networking Foundation (ONF)

The ONF is a non-profit foundation that takes care of the procedures needed to provide a reliable networking environment for users and carriers. It develops Software-Defined Networking and provides the elements used to build such networks, including the OpenFlow protocol, which handles communication between the data and control planes of the system. OpenFlow is the first standard interface designed specifically for software-defined networks. It provides the high speed and performance that users and carriers are looking for, along with traffic control across multiple users and carriers in the network. OpenFlow already brings many benefits, such as centralized control and management of network devices, and improved automation by letting applications program the underlying network through common APIs.

The need for a new architecture

Today we do everything through the network. We use mobile devices and cloud services such as Dropbox and Google Drive, all of which need a high-performance network to provide good service to the user, not to forget Internet servers. The new architecture gives us new ways to handle changing traffic patterns, so that information can be sent and received across the network without problems. Privacy will also be a priority with the new technology: IT will no longer need to inspect personal information on mobile devices, because corporate data and intellectual property will be protected by the network itself. All of these services will demand more bandwidth, because they move large amounts of data.

Limitations of current network technology

Meeting current market demands with old technologies is hard, because new users and new devices such as smartphones and tablets have come to dominate the market; doing so with traditional network architectures is virtually impossible. Faced with flat or reduced budgets, enterprise IT departments are trying to squeeze the most from their networks using device-level management tools and manual processes. Carriers face similar challenges as demand for mobility and bandwidth explodes. Existing network architectures were simply not designed to meet the requirements of today's users, enterprises, and carriers, and many network designers resist the new network because they have always worked with the old architecture and are wary of moving to a different one with a controlling center.

Complexity that leads to stasis: networking technology to date has consisted largely of discrete sets of protocols designed to connect hosts reliably over arbitrary distances, link speeds, and topologies. To meet business and technical needs over the last few decades, the industry has evolved these protocols to deliver higher performance and reliability, broader connectivity, and more stringent security. The problem now facing network designers is that protocols tend to be defined in isolation, each solving one specific problem without any fundamental abstractions. This has resulted in one of the primary limitations of today's networks: complexity. For example, to add or move any device, IT must touch multiple switches, routers, firewalls, Web authentication portals, and so on, and update ACLs, VLANs, quality of service (QoS), and other protocol-based mechanisms using device-level management tools. In addition, network topology, vendor switch model, and software version must all be taken into account.
Due to this complexity, today's networks are relatively static, as IT seeks to minimize the risk of service disruption.

The static nature of networks is in stark contrast to the dynamic nature of today's server environment. Server virtualization has greatly increased the number of hosts requiring network connectivity and changed basic assumptions about the physical location of those hosts. Before virtualization, applications resided on a single server and exchanged traffic with selected clients. Today, applications are distributed across many virtual machines that exchange traffic with one another across the whole network, especially given the high demand for traffic flow. Virtual machines also migrate to optimize and rebalance server workloads, which means the physical end points of existing flows change over time. This challenges many aspects of traditional networking, from addressing schemes and namespaces to the basic notion of a segmented, routing-based design. The traditional network also fails users and carriers through inconsistent policies: IT may have to configure hundreds or thousands of devices each time a change is needed, whereas with the new architecture a single controller can be reconfigured at the click of a button on behalf of millions of users. The traditional network likewise suffers from an inability to scale: as demands on the data center rapidly grow, so too must the network, but the network becomes vastly more complex with the addition of hundreds or thousands of network devices that must be configured and managed.
IT has also relied on link oversubscription to scale the network, based on predictable traffic patterns; in today's virtualized data centers, however, traffic patterns are incredibly dynamic and therefore unpredictable. Large corporations such as Google, Yahoo, Facebook, and Twitter face an especially hard job: they serve enormous user bases and run large algorithms in parallel, generating traffic that the traditional network can hardly handle.

Carrier dependence on vendors

Today every carrier tries to deploy new capabilities and services in rapid response to changing business needs and user demands. Their ability to respond, however, is hindered by vendors' equipment product cycles, which can stretch to three years or more, and a lack of standard, open interfaces further limits the ability of network operators to tailor the network to their individual environments. Something is clearly missing between the market, or the end user, and the carrier, and this lack of communication has brought the industry to a dead end. The industry's answer is the Software-Defined Network: an architecture and set of standards that serve every side of the market, from the carrier all the way to the devices the users are using.

Software-Defined Network

Software-defined networking is an approach to building computer networks that separates and abstracts the elements of the system. Software-Defined Networking (SDN) uses different strategies from traditional networking: it disaggregates the usual networking stack and reintegrates it to provide greater flexibility. With SDN we gain better support for cloud services, because the network can be mass-customized to deliver them. SDN takes a different view of networking: it gathers distinct groups of technologies, opening the data, control, and management planes so that they can be combined more easily within the framework of an application programming interface, the user-facing interface to the network. The range of SDN technologies will be defined and explained in this paper, along with how they can be applied to multiple clouds.
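The decoupling described above can be sketched in a few lines. This is a minimal illustration, not any real SDN stack: all class and method names here are invented for the example.

```python
# Minimal sketch of SDN's core idea: a logically centralized control plane
# computes forwarding rules and pushes them to simple data-plane switches.
# Class and method names are illustrative, not from any real controller.

class DataPlaneSwitch:
    """Forwards packets by looking up a rule table it does not compute itself."""
    def __init__(self, name):
        self.name = name
        self.table = {}  # destination -> output port

    def install_rule(self, dst, port):
        self.table[dst] = port

    def forward(self, dst):
        # A table miss is punted up to the controller, as in OpenFlow.
        return self.table.get(dst, "send-to-controller")

class ControlPlane:
    """Holds the network-wide view and programs every switch centrally."""
    def __init__(self):
        self.switches = []

    def attach(self, switch):
        self.switches.append(switch)

    def set_path(self, dst, port_map):
        # One decision here updates many devices at once -- the centralized
        # control that per-device configuration lacks.
        for sw in self.switches:
            sw.install_rule(dst, port_map[sw.name])

ctrl = ControlPlane()
s1, s2 = DataPlaneSwitch("s1"), DataPlaneSwitch("s2")
ctrl.attach(s1)
ctrl.attach(s2)
ctrl.set_path("10.0.0.5", {"s1": 2, "s2": 1})
print(s1.forward("10.0.0.5"))  # 2
print(s2.forward("10.0.0.9"))  # send-to-controller
```

The point of the sketch is that the switches hold no intelligence of their own: every forwarding decision originates in one place, which is what makes the network "very smart and centralized" as described earlier.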


Figure 2: Software- Defined Network Idea

From Figure 2 we can see the layers that the new architecture consists of: the control plane, open APIs, the data plane, and the hosts. There are also important components that are not visible to the user or the designer yet perform work that no other component can do; they form part of the open APIs, such as OpenFlow, visibility, and configuration. I will discuss each of them in this paper, starting with the importance of each part in the system.

OpenFlow

OpenFlow is the first standard communications interface defined between the control and forwarding layers of a Software-Defined Network (SDN) architecture. OpenFlow allows direct access to and manipulation of the forwarding plane of network devices such as switches and routers, both physical and virtual (hypervisor-based). It is the absence of such an open interface to the forwarding plane that has led to the characterization of today's traditional networks as monolithic, closed, and mainframe-like. No existing protocol does what OpenFlow does: it moves network control out of the networking switches and into logically centralized control software. The OpenFlow protocol is implemented on both sides of the interface between network infrastructure devices and the SDN control software. OpenFlow uses the concept of flows to identify network traffic, based on pre-defined match rules that can be statically or dynamically programmed by the SDN control software. It also allows IT to define how traffic should flow through network devices based on parameters such as usage patterns, applications, and cloud resources. Since OpenFlow allows the network to be programmed on a per-flow basis, an OpenFlow-based SDN architecture provides extremely granular control, enabling the network to respond to real-time changes at the application, user, and session levels. Current IP-based routing does not provide this level of control: all flows between two endpoints must follow the same path through the network, regardless of their different requirements.
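The per-flow matching just described can be illustrated with a toy flow table. This is a simplified sketch: real OpenFlow defines many more match fields and actions, and the field names below are abbreviations chosen for the example.

```python
# Hedged sketch of OpenFlow-style flow matching: each flow entry pairs
# match fields with an action, and entries are checked in priority order.

flow_table = [
    # (priority, match fields, action)
    (200, {"ip_dst": "10.0.0.5", "tcp_dst": 80}, "output:3"),   # web traffic
    (100, {"ip_dst": "10.0.0.5"},                "output:2"),   # everything else to host
    (0,   {},                                    "controller"), # table-miss entry
]

def match_packet(packet, table):
    """Return the action of the highest-priority entry whose fields all match."""
    for _priority, match, action in sorted(table, key=lambda e: -e[0]):
        if all(packet.get(k) == v for k, v in match.items()):
            return action
    return "drop"

web   = {"ip_dst": "10.0.0.5", "tcp_dst": 80}
other = {"ip_dst": "10.0.0.5", "tcp_dst": 22}
print(match_packet(web, flow_table))    # output:3
print(match_packet(other, flow_table))  # output:2
```

Note how two flows between the same pair of endpoints take different paths depending on the application port, which is exactly the granularity that plain destination-based IP routing cannot express.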


Figure 3: Open Flow Switch specification

The OpenFlow protocol is central to the network because of the crucial connection it makes between its different layers. We can say it is the key that enables software-defined networking, and it is currently the only standardized SDN protocol that allows direct manipulation of the forwarding plane of network devices. In the beginning it was applied to Ethernet-based networks, but it can be extended much further. OpenFlow-based SDNs can be deployed on existing networks, both physical and virtual. Network devices can support OpenFlow-based forwarding alongside traditional forwarding, which makes it easy for enterprises and carriers to introduce OpenFlow-based SDN technologies progressively, even in multi-vendor network environments.

Benefits of OpenFlow-Based Software-Defined Networks

Centralized control of multi-vendor environments

Reduced complexity through automation

Higher rate of innovation

Increased network reliability and security

More granular network control

Better user experience

OpenFlow Controller

An OpenFlow controller is the software that manages one or more switches supporting the OpenFlow protocol. There are now many options, starting with the reference Learning Switch controller, which comes with the reference Linux distribution, can be configured to act as a hub or as a flow-based learning switch, and is written in C. Some examples of OpenFlow controllers are listed below:

NOX: NOX is a Network Operating System that provides control and visibility into a network of OpenFlow switches. It supports concurrent applications written in Python and C++, plus includes a number of sample controller applications.

Beacon: Beacon is an extensible Java-based OpenFlow controller. It was built on an OSGI framework, allowing OpenFlow applications built on the platform to be started/stopped/refreshed/installed at run-time, without disconnecting switches.

Helios: Helios is an extensible C-based OpenFlow controller built by NEC, targeting researchers. It also provides a programmatic shell for performing integrated experiments.

BigSwitch: BigSwitch released a closed-source controller based on Beacon that targets production enterprise networks. It features a user-friendly CLI for centrally managing your network.

SNAC: SNAC is a controller targeting production enterprise networks. It is based on NOX0.4, and features a flexible policy definition language and a user-friendly interface to configure devices and monitor events.

Maestro: Maestro is an extensible Java-based OpenFlow controller released by Rice University. It has support for multi-threading and targets researchers.
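The learning-switch application that several of these controllers ship as a sample can be sketched as follows. The API here is invented for illustration; real controllers expose packet-in events and flow-mod messages rather than these method names.

```python
# Illustrative sketch of the MAC-learning logic behind the sample
# "learning switch" applications that controllers such as NOX and Beacon
# include. Method and event names are hypothetical.

from collections import defaultdict

class LearningSwitchApp:
    def __init__(self):
        # Per-switch table: MAC address -> port it was last seen on.
        self.mac_to_port = defaultdict(dict)

    def packet_in(self, switch_id, src_mac, dst_mac, in_port):
        """Called when a switch has no matching flow entry for a packet."""
        # Learn which port the source MAC lives behind.
        self.mac_to_port[switch_id][src_mac] = in_port
        # If the destination is known, install a flow; otherwise flood.
        out_port = self.mac_to_port[switch_id].get(dst_mac)
        if out_port is not None:
            return ("install_flow", dst_mac, out_port)
        return ("flood",)

app = LearningSwitchApp()
print(app.packet_in("s1", "aa:aa", "bb:bb", 1))  # ('flood',)
print(app.packet_in("s1", "bb:bb", "aa:aa", 2))  # ('install_flow', 'aa:aa', 1)
```

The second packet-in can already install a flow entry because the first one taught the controller where `aa:aa` lives, so subsequent traffic is forwarded by the switch without involving the controller at all.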

Figure 4: OpenFlow Controller

Software-Defined Network (SDN) in the Data Plane

Engineers have been working toward SDN for a long time. In the 1980s they introduced virtual LANs (Local-Area Networks) to extend the physical reach and scale of Ethernet networks. One of their priorities was to place users and hosts in the same logical group with less concern for their exact physical location. A problem engineers faced in doing so was that server virtualization ran up against many limitations of traditional LANs. The limitations were not just in software but in hardware as well: server virtualization is I/O intensive, and the layers impose their own constraints, such as layer 2 domains being limited to 4k VLANs. Another problem was traffic that is hard or nearly impossible to carry across layer 2 boundaries. To bypass all of these challenges, the network must be fully virtualized by creating overlay tunnels, so that the right service can be provided without such problems.
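The "4k VLANs" ceiling mentioned above, and the headroom an overlay buys, is simple arithmetic. The comparison below assumes VXLAN as the overlay encapsulation, which is a common choice but not named in the original text.

```python
# Back-of-the-envelope check of the segment-count limits discussed above.
# An 802.1Q VLAN ID is 12 bits; a VXLAN network identifier (VNI) is 24 bits.

vlan_bits, vxlan_bits = 12, 24

vlan_segments = 2 ** vlan_bits    # the "4k VLANs" ceiling of a layer 2 domain
vxlan_segments = 2 ** vxlan_bits  # logical networks available to the overlay

print(vlan_segments)                     # 4096
print(vxlan_segments)                    # 16777216
print(vxlan_segments // vlan_segments)   # 4096 (times more segments)
```

This is why multi-tenant data centers outgrow plain VLANs long before they outgrow their hardware: 4,096 segments is far too few once every tenant wants several isolated networks.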

Designing the tunnels was not the problem, because that part had already been done. The challenge was that these tunnels had to meet all the requirements of the data centers that control them, and they had to allow administrators to build their own logical overlay networks over compute and storage resources as needed. This work can be conducted without interfering with the functioning of the physical network, reducing overall deployment time and confining potential errors or disruptions to the logical service in question. In sum, overlay networks deliver more efficient asset utilization by aligning networking more naturally with VM requirements. They also allow network operators to deploy services rapidly and flexibly in a fine-grained, workload-centric way, as part of a specific end-to-end IT service.

Figure 5: Basic SDN Operation

Network Virtualization and Ethernet Fabrics

Network virtualization solves real problems of the modern virtualized data center, and it gets the job done, but it adds complexity of a different sort for both the overlay and the physical network. A new problem then arises: the physical and overlay networks cannot be managed together, because neither is visible to the other, so they cannot be controlled concurrently. Moreover, to use the overlay network fully, automation of the physical network becomes critical. Ethernet fabrics are evolutionary forms of Ethernet that provide a flatter, highly available network architecture with some degree of automation.

Brocade VCS Fabric technology is needed to build an efficient, well-organized data center network. Ethernet fabrics, defined above, run on Brocade VCS Fabric technology, which offers virtual machines mobility unmatched by traditional network architectures. Brocade VCS Fabric technology helps the network by providing greater flexibility for information technology teams.

Efficient, reliable delivery of tunneled traffic

At this stage the network expands, making the scope and flexibility of I/O aggregation larger and more reliable by policy, but traffic delivery still relies on the performance, resilience, and services of the physical infrastructure. The company's VCS Fabric technology provides a highly reliable, low-latency physical transport foundation for network virtualization. VCS technology provides per-packet round-robin load balancing even within overlay environments, improving Link Aggregation Group (LAG) utilization and greatly reducing the potential for tunnel congestion problems.
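Why per-packet round robin helps tunneled traffic can be shown with a small simulation. This is purely illustrative, not vendor code: with hash-based placement (the usual LAG behaviour), an overlay tunnel looks like one big flow and lands entirely on one link, while round robin spreads the same packets evenly.

```python
# Sketch contrasting hash-based LAG placement with per-packet round robin.
# A tunnel carrying many inner flows hashes to a single outer flow, so
# hash placement pins all of its packets to one LAG member.

from itertools import cycle
from collections import Counter

def hash_balance(packets, links):
    """Typical LAG behaviour: each flow always hashes to the same link."""
    return Counter(links[hash(p["flow"]) % len(links)] for p in packets)

def round_robin_balance(packets, links):
    """Per-packet round robin: load spreads evenly regardless of flow count."""
    rr = cycle(links)
    return Counter(next(rr) for _ in packets)

# One tunnel ("flow t") carrying 8 packets over a 4-link LAG:
pkts = [{"flow": "t"}] * 8
links = ["lag0", "lag1", "lag2", "lag3"]

print(hash_balance(pkts, links))         # all 8 packets land on a single link
print(round_robin_balance(pkts, links))  # 2 packets on each of the 4 links
```

The even spread is what improves LAG utilization under overlays; the trade-off (packet reordering within a flow) is what fabric hardware has to handle to make per-packet spraying safe.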

Managing Software-defined Network within Clouds

Most papers and discussions of software-defined networking stop at the control and data planes; however, if a cloud-optimized solution is to be considered, abstraction of the management layer is also critical, as it enables orchestration applications to interact more holistically with the networking elements. Moreover, this provides interfaces for cloud platform systems that enable end-to-end management of all the elements of the cloud, such as compute, storage, and network. Applications built on these platforms help customers deploy Infrastructure-as-a-Service architectures rapidly and manage those services easily, for both the user and the carrier.

OpenStack is one such cloud platform that provides the infrastructure to build cloud services and applications that are simple to implement and are massively scalable. Formed in July 2010 by Rackspace Hosting and NASA, OpenStack is an open-source community that has had significant traction with over 2,685 people and 156 member companies. OpenStack has various projects or components—including Compute (Nova), Storage (Swift), Network (Quantum), and Dashboard (Horizon)—that are released on a regular basis. OpenStack is optimized for the cloud, as it enables interoperability between different vendor clouds, is designed to be massively scalable, and provides seamless management between private and public clouds.

The company is committed to OpenStack and is optimizing its product portfolio to provide plug-ins to OpenStack's networking layer; the use of OpenStack solutions will be an important element as current architectures transition to cloud-optimized architectures. By providing a virtualized networking platform as a foundation, the company together with OpenStack will facilitate automated resource deployment, enabling customers to deploy multi-tier applications in data centers.


Figure 6: OpenStack Architecture

OpenFlow Basics

In this section I return to OpenFlow, because it is the main and most important part of the software-defined network (SDN). As stated previously in this paper, OpenFlow is the part on which SDN is fundamentally based; the figures below explain its basics.

Figure 7: OpenFlow idea

Figure 8: How OpenFlow works

Conclusion and discussion

Cloud computing turns hardware into virtual machines, moving everything the user does in daily computing or business into the cloud, available whenever and wherever it is needed. Security and speed are the things the ordinary user cares about most. The idea is to have everything in an online organization that works like the traffic lights in a street, taking care of all the traffic in the system and organizing it, providing a reliably fast and secure connection for the user and for the carrier providing the service. Software-defined networking (SDN) promises to make high-capacity networks cheaper to build and, especially, to reconfigure on the fly, as well as potentially faster and more efficient. As more and more computing moves to the cloud, those network improvements will be critical to keeping everything affordable and available. The problem is that software-defined networking is complicated, and many software professionals have been confused by it; the suggestion is to slice the system, like bread, to make it easier for the ordinary user to deal with. The basic picture is this: we have our routers, our switches, and lots and lots of CAT5 and CAT6 cable strung around, all physical hardware that, when connected in a certain way, defines the flow of data in the organization. Like laying down a network of highways, planning a network takes time; it has to be done right the first time, because shuffling things around afterward is expensive. And because everything is going to the cloud, we will need a great many rules to make sure we stay secure, and SDN will take time to emerge until those rules are worked out.
SDN could even be a Cisco killer, because it allows network engineers to support a switching fabric across multi-vendor hardware and application-specific integrated circuits. Currently, the most popular specification for creating a software-defined network is an open standard called OpenFlow, which lets network administrators remotely control routing tables.

Figure 9: Software-Defined Network (SDN)


