The Changes Of The Internet On Humans


02 Nov 2017


CHAPTER 6

The Internet has brought dramatic changes in the interaction between individuals, businesses, and governments. Moreover, global access to the Internet has become ubiquitous. With broadband networks, large amounts of data can be transferred rapidly between parties over the Internet. Users take these advances for granted until security attacks cripple the global Internet. Attacks spread rapidly through the same broadband networks that made the Internet revolution possible. The cost of these attacks to individuals, companies, and governments has increased rapidly as well. According to Trend Micro, virus attacks cost global businesses an estimated $55 billion in damages in 2003, up from about $20 billion in 2002.

In academia, researchers have proposed the Internet2 and Supernet networks to provide a networking test bed that all university campuses in the United States can use for real-time collaboration at speeds thousands of times faster than those available on currently deployed campus networks. Many scientists predict that Internet data network speeds will increase from the 10 Gbps range to the Tbps range. With this explosive growth in Internet e-commerce and the advent of terabit networks on the horizon, security experts must reevaluate current security solutions. Clearly, the critical challenge facing the Internet in the future is establishing security and trust. Many researchers argue that automatic detection and protection is the only viable way to stop fast-spreading worms such as SQL Slammer and Code Red.

Recognizing that traditional intrusion detection systems have neither adapted adequately to new networking paradigms, such as wireless and mobile networks, nor scaled to meet the requirements posed by high-speed (gigabit and terabit) networks, this research presents techniques to address the threats posed by network-based denial-of-service attacks in such networks. Although there have been recent attempts [37, 39] to train anomaly detection models over noisy data, to the best of our knowledge, the research presented in this dissertation is the first attempt to investigate the effect of incomplete data sets on the anomaly detection process. This chapter describes the work performed during this dissertation and highlights significant achievements and possible extensions to the existing state of the art.

6.1 SUMMARY

This research provides a methodology for detecting intrusions when only a subset of the audit data is available. To achieve this goal, the proposed anomaly detection scheme integrates a number of components in a unique way to overcome the limitations of previous intrusion detection systems. To tackle the problem of intrusion detection in high-bandwidth networks, the following modules were implemented and tested.

Sampling for data reduction: We believe that the key to efficient and cost-effective intrusion detection in high-bandwidth networks is a system that samples the incoming network traffic instead of attempting to parse and analyze all of it. From this perspective, this dissertation has presented an adaptive sampling algorithm that uses a weighted least-squares predictor to dynamically alter the sampling rate based on the accuracy of its predictions. Results have shown that, compared to simple random sampling, the proposed adaptive sampling algorithm performs well on random, bursty data. Our simulation results have shown that the proposed sampling scheme is effective in reducing the volume of sampled data while retaining the intrinsic characteristics of the network traffic. We have also demonstrated that the proposed scheme preserves the self-similar property of network traffic in the sampling process.
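As a rough illustration of the idea (not the dissertation's actual implementation), a weighted least-squares predictor can extrapolate the next traffic value from a recent window, and the sampling rate can be raised when the prediction error grows and lowered when traffic is predictable. The tolerance, rate bounds, and adjustment factors below are assumed values:

```python
def wls_predict(samples, weights):
    """Weighted least-squares linear fit over a window of recent
    samples; returns the extrapolated next value."""
    n = len(samples)
    xs = list(range(n))
    sw = sum(weights)
    mx = sum(w * x for w, x in zip(weights, xs)) / sw
    my = sum(w * y for w, y in zip(weights, samples)) / sw
    num = sum(w * (x - mx) * (y - my)
              for w, x, y in zip(weights, xs, samples))
    den = sum(w * (x - mx) ** 2 for w, x in zip(weights, xs))
    slope = num / den if den else 0.0
    return my + slope * (n - mx)  # one step beyond the window

def adapt_rate(rate, predicted, observed, tol=0.2,
               min_rate=0.01, max_rate=1.0):
    """Raise the sampling rate when the relative prediction error
    exceeds tol (traffic is surprising); otherwise back off."""
    error = abs(predicted - observed) / max(observed, 1e-9)
    if error > tol:
        return min(max_rate, rate * 2)   # sample more aggressively
    return max(min_rate, rate * 0.9)     # decay toward min_rate
```

On a perfectly linear window with equal weights, `wls_predict([1, 2, 3], [1, 1, 1])` extrapolates to 4; a large prediction miss doubles the rate, while an accurate prediction shrinks it.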

Bloom-filter-based fast flow aggregation: In high-speed networks, flow aggregation and state maintenance become a problem as the number of flows increases. As a result, delays are introduced into the intrusion detection process which, in the worst case, render the intrusion detection system totally ineffective. To mitigate this delay and provide a fast lookup for maintaining the state of an incoming flow, this dissertation proposes a Bloom-filter-based flow aggregation scheme.
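A minimal sketch of the underlying data structure, assuming an arbitrary bit-array size and hash count (the dissertation's actual parameters and hashing scheme are not reproduced here): a Bloom filter answers "have we seen this flow before?" in constant time and memory, at the cost of occasional false positives but never false negatives.

```python
import hashlib

class BloomFilter:
    """Fixed-size bit array probed by k salted hashes."""
    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, key):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(key))

# A flow is keyed by its 5-tuple; state is allocated only for
# flows the filter has not seen yet.
seen = BloomFilter()
flow = ("10.0.0.1", 55321, "10.0.0.2", 80, "TCP")
new_flow = flow not in seen  # True: allocate flow state
seen.add(flow)
```

The lookup cost is O(k) regardless of how many flows have been inserted, which is why the structure suits per-packet processing in high-speed links.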

Improvements in clustering speed: The classical EM algorithm has many desirable features, including a strong statistical basis, theoretical guarantees about optimality, easily explainable results, and robustness to noise and to highly skewed data. Nevertheless, it also has several disadvantages: in general, it can converge to a poor locally optimal solution and requires multiple passes over the dataset for every iteration. To address these problems, this dissertation proposed speed improvements that employ sufficient statistics and learning steps to accelerate convergence and reduce the number of passes per iteration.
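The sufficient-statistics idea can be sketched, for a one-dimensional Gaussian cluster, as follows (a simplified illustration only; the actual algorithm also applies learning steps and handles multivariate data): each point is folded into three running quantities, so cluster parameters can be re-estimated without revisiting the raw data.

```python
def update_stats(stats, x, resp):
    """Fold one point x with responsibility resp into a cluster's
    sufficient statistics: (weighted count, sum, sum of squares)."""
    n, s, ss = stats
    return (n + resp, s + resp * x, ss + resp * x * x)

def cluster_params(stats):
    """Recover the cluster mean and variance from the statistics."""
    n, s, ss = stats
    mean = s / n
    var = ss / n - mean * mean
    return mean, var
```

Because the triple is tiny and additive, statistics from separate data batches can also be merged, which is what removes the need for a full pass over the dataset on every iteration.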

Anomalous flow detection: Lastly, building on the improvements described above, this dissertation proposes a novel intrusion detection scheme capable of detecting anomalous/intrusive flows in the incoming network traffic, indicative of a network-based denial-of-service attack, when only a subset of the audit data is available. The results described above show that the proposed system achieves a high detection rate even when only 75% of the audit data is present.
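One simple way such a detector might score flows against learned clusters while tolerating missing (unsampled) features is sketched below. This is an illustration only; the feature vectors, diagonal-covariance distance, and threshold are assumptions, not the dissertation's actual scheme:

```python
import math

def flow_score(flow, clusters):
    """Distance of a flow to its nearest cluster, assuming diagonal
    covariances; features recorded as None (missing from the sampled
    audit data) are simply skipped."""
    best = math.inf
    for means, variances in clusters:
        d = 0.0
        for x, m, v in zip(flow, means, variances):
            if x is None:
                continue
            d += (x - m) ** 2 / v
        best = min(best, d)
    return best

def is_anomalous(flow, clusters, threshold=9.0):
    """Flag flows that sit far from every learned normal cluster."""
    return flow_score(flow, clusters) > threshold
```

A flow close to a normal-traffic cluster passes even with a feature missing, while a flow far from every cluster is flagged.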

6.2 FUTURE WORK

In the last twenty years, intrusion detection systems have slowly evolved from host- and operating system-specific applications to distributed systems that involve a wide array of operating systems. The challenges that lie ahead for the next generation of intrusion detection systems and, more specifically, for anomaly detection systems are many.

Over the years, numerous techniques, models, and full-fledged intrusion detection systems have been proposed and built in the commercial and research sectors. However, there is no globally accepted standard or metric for evaluating an intrusion detection system. Although the ROC curve has been widely used to evaluate the accuracy of intrusion detection systems and to analyze the tradeoff between the false-positive rate and the detection rate, evaluations based on the ROC curve are often misleading and/or incomplete [44, 113]. Recently, several methods have been proposed to address this issue [7, 44, 113]. However, a majority of the proposed solutions rely on parameter values (such as the cost associated with each false alarm or missed attack instance) that are difficult to obtain and are specific to a particular network or system. As a result, such metrics may lack the objectivity required to conduct a fair evaluation of a given system. Therefore, one open challenge is the development of a general, systematic methodology and/or a set of metrics that can be used to evaluate intrusion detection systems fairly.
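To make the tradeoff concrete: each detector configuration maps to one (detection rate, false-positive rate) point on the ROC curve, and a cost-based metric weighs those rates by exactly the subjective parameters (per-miss and per-false-alarm costs) that the text criticizes as hard to obtain. A small sketch with made-up labels and costs:

```python
def roc_point(alerts, labels):
    """Detection rate and false-positive rate for one detector
    configuration, given per-event ground-truth labels (1 = attack)."""
    tp = sum(1 for a, l in zip(alerts, labels) if a and l)
    fp = sum(1 for a, l in zip(alerts, labels) if a and not l)
    attacks = sum(labels)
    benign = len(labels) - attacks
    return tp / attacks, fp / benign

def expected_cost(dr, fpr, base_rate, c_miss, c_false):
    """Expected per-event cost: missed attacks weighted by c_miss,
    false alarms by c_false. Both costs are subjective inputs."""
    return (base_rate * (1 - dr) * c_miss
            + (1 - base_rate) * fpr * c_false)
```

Two detectors with different ROC points can swap rank as `c_miss`, `c_false`, or the attack base rate change, which is why such comparisons are hard to make objective.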

An important aspect of intrusion detection, which has also been proposed as an evaluation metric [102, 109, 116], is the ability of an intrusion detection system to defend itself from attacks. Attacks on intrusion detection systems can take several forms. As a specific example, consider an attacker sending a large volume of non-attack packets specially crafted to trigger many alarms within an intrusion detection system, thereby overwhelming the human operator with false positives or crashing the alert processing or display tools. Axelsson [6], in his 1998 survey of intrusion detection systems, found that a majority of the available intrusion detection systems at that time performed very poorly when it came to defending themselves from attacks. Since then, the ability of intrusion detection systems to defend themselves from attacks has improved only marginally.

Lastly, an increasing problem in today’s corporate networks is the threat posed by insiders, e.g., disgruntled employees. In a survey [88] of 500 participants conducted by the United States Secret Service and CERT at Carnegie Mellon University, 71% of respondents reported that 29% of the attacks they experienced were caused by insiders. Respondents identified current or former employees and contractors as the second-greatest cyber security threat, preceded only by hackers. Configuring an intrusion detection system to detect internal attacks is very difficult. The greatest challenge lies in creating a good rule set for detecting "internal" attacks or anomalies: different network users require different degrees of access to different services, servers, and systems for their work, making it extremely difficult to define and create user- or system-specific usage profiles.

Although there is some existing work in this area (e.g., [77, 94]), more research is needed to find practical solutions.

6.3 CONCLUSION

This chapter has presented conclusions based upon the research results and recommended areas of future research. The goal of this research was to provide a methodology for detecting network-based attacks with incomplete audit data. The proposed scheme, SCAN, fills a niche in the intrusion detection domain by addressing the problem of detecting network-based denial-of-service attacks in high-performance, high-availability, high-speed networks. By employing an intelligent sampling scheme, SCAN reduces computational complexity [20] by reducing the volume of audit data that is processed, without losing the intrinsic characteristics of the network traffic. In addition, SCAN employs an improved Expectation-Maximization-based clustering technique to impute the missing values and further increase the accuracy of anomaly detection.

The results of this dissertation have demonstrated that the research goal was achieved. For reference details of the papers published by the author during the course of the dissertation work, the reader is directed to the Vita at the end of the dissertation.
