A Novel Rule-Based Classifier for IDS


Abstract: Signature detection involves searching network traffic for a series of bytes or packet sequences known to be malicious. A key advantage of this detection method is that signatures are easy to develop and understand if you know what network behavior you are trying to identify. However, signature engines are prone to false positives, since they are commonly based on regular expressions and string matching. In this paper we propose a novel rule-based classifier, and we use the standard KDDCUP99 dataset to evaluate the model.

Keywords: Rules,

Introduction

A rule-based classifier is a method for classifying records using a collection of "if ... then ..." rules. For this purpose we first use training data to identify rules, and then test their significance on separate test data. There are several mechanisms for intrusion detection in a networking environment, each with a specified significance. An appropriate collection of "if ... then ..." rules can identify the required attacks with a high degree of accuracy. In this work, the sole idea is to identify highly appropriate rules for intrusion detection in the networking environment, and hence for the Network Intrusion Detection System (NIDS).

3.2 Problem statement

The objective is to identify a novel rule-based classifier whose performance is good enough to detect threats. The approach used is a two-level architecture. In the first layer, selected training data is processed to produce rule-based classifiers. In the second layer, these classifiers are applied to test data to evaluate the performance of the classification rules and their significance.

3.3 Approach

In this chapter a two-tier network intrusion detection system is discussed. The two-tier architecture of the proposed Network Intrusion Detection System (NIDS) is shown in figure 3.1. The framework includes the pre-processing of the data for detecting intruder attacks, which involves:

Online data from a source system is stored into a file.

Check this data file for missing values and noise, and apply appropriate corrections.

Reduce the data size by sampling, and select relevant features for processing.

This preprocessing is done in the first layer. After preprocessing the data, the classification model is trained using a given classification data mining technique. The Induction Decision Tree (IDT) model has been used for classification. The training of the IDT classifier is done in the first tier, and classification is done in the second tier. To summarize, tier-1 is the training phase and tier-2 is the testing phase. Each processing step in each tier is described below.

3.4 Methodology

In order to produce an effective IDS, the approach uses a two-tier architecture which performs training and testing with the relevant data. A detailed description of each layer follows.

3.4.1 Tier One Task Description

In the first layer, training is performed to achieve intrusion detection: read the chosen dataset, apply preprocessing techniques such as selecting a feature subset using the entropy method, and then apply the classifier, an Induction Decision Tree, to classify the data into either normal records or attack-type records such as DoS, U2R, R2L, and Probe. A detailed description of how the data is processed is given in the coming sections. The main idea of this thesis is to build a classification model using IDT for normal and attack records based on labelled training data, and to use it to classify each new unseen record. The rules used for classification are generated from the IDT rather than directly from the training data; classifying with these rules reduces classification time compared with using the IDT model itself. In task classification, decision trees expose the steps taken to arrive at a classification. Every decision tree begins with a root node, which is the ancestor of every other node. Each node evaluates an attribute of the data and selects the best path.

Figure 3.1 Two-tier Architecture for NIDS. Tier 1 (training): Read Dataset (KDDCUP99) → Select Feature Subset using Entropy method → Induction Decision Tree Classifier → {Normal, Probe, DoS, R2L, U2R}. Tier 2 (testing): For each test record → Rule Based Classifier → {Normal, Probe, DoS, R2L, U2R}.

3.4.2 Tier Two Task Description

This layer comprises the testing phase of the classifier, using the new rule-based classifier discussed below. A rule-based classification model [54, 55] must be able to handle skewed (imbalanced) class distributions. It is also called a supervised classification technique [56] because it requires knowledge of both normal and attack classes.

A rule is a representation of information, or bits of knowledge. The new rule-based classifier [57] uses a set of IF-THEN rules for classification. An IF-THEN rule is an expression of the form

IF condition THEN conclusion.

An example is rule R1,

R1: IF age = youth AND student = yes THEN buys_computer = yes.

A rule-based classifier can be built by extracting a rule set that shows the relationship between the class label and the dataset attributes. Each classification rule has the following form:

R: (condition) => y.

Here the condition is called the rule antecedent, a conjunction of attribute test conditions, and y is called the rule consequent, the class label. A rule set can comprise multiple rules, RS = {R1, R2, . . . , Rn}. Coverage and accuracy are used to assess a rule R. Assume a tuple X from a class-labeled data set D. Let ncover be the number of tuples covered by R, ncorrect the number of tuples correctly classified by R, and |D| the number of tuples in D. The coverage and accuracy of R are defined as

Coverage (R) = ncover / |D| (3.2)

Accuracy (R) = ncorrect / ncover (3.3)

The percentage of tuples covered by the rule gives the rule's coverage. To assess the accuracy of a rule, consider the tuples it covers and find what percentage of them the rule classifies correctly.
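As a small illustration, the following sketch computes both measures for a single rule, assuming records are dictionaries and a rule is represented by a predicate plus a class label (the helper names are ours, not from the thesis):

```python
def coverage_and_accuracy(matches, label, dataset):
    """matches: predicate over a record (the rule antecedent).
    label: the rule consequent.  dataset: list of (record, class)."""
    n_cover = sum(1 for rec, _ in dataset if matches(rec))
    n_correct = sum(1 for rec, cls in dataset if matches(rec) and cls == label)
    cov = n_cover / len(dataset) if dataset else 0.0   # eq. 3.2
    acc = n_correct / n_cover if n_cover else 0.0      # eq. 3.3
    return cov, acc

# Rule R1 from the text, applied to three toy tuples:
data = [({"age": "youth", "student": "yes"}, "yes"),
        ({"age": "youth", "student": "no"}, "no"),
        ({"age": "senior", "student": "yes"}, "yes")]
r1 = lambda r: r["age"] == "youth" and r["student"] == "yes"
print(coverage_and_accuracy(r1, "yes", data))  # (0.333..., 1.0)
```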

Five algorithms were applied in this model: the Novel Rule-Based (NRBC) algorithm, the Novel_Rule algorithm, the Learn_One_Rule algorithm, the Growing-phase algorithm, and the Split-Uncertain algorithm. These algorithms generate the rules by which the given data is classified. The functionality of the proposed algorithm is described as follows.

NRBC Algorithm 1

Input: Training Dataset D

Output: Intrusion Detection Model

Procedure:

Calculate the information gain for each attribute Ai in {A1, A2,…,An} from the training data D using equation (3.1).

Choose an attribute Ai from the training data D with the maximum information gain value.

Split the training data D into sub-datasets {D1,D2,…,Dn} depending on the attribute values of Ai.

Calculate the prior P(Cj) and conditional probabilities P(Aij|Cj) of each sub-dataset Di.

Classify the examples of each sub-dataset Di with their respective prior and conditional probabilities.

If any example of sub-dataset Di is misclassified, then again calculate the information gain of the attributes of sub-dataset Di, choose the best attribute Ai with the maximum information gain value from Di, split the sub-dataset Di into sub-sub-datasets Dij, and again calculate the prior and conditional probabilities for each sub-sub-dataset Dij. Finally, classify the examples of the sub-sub-datasets using their respective prior and conditional probabilities.

Continue this process until all the examples of sub/sub-sub-datasets are correctly classified.

Figure 3.3 Novel Rule-Based Classification (NRBC) Algorithm
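The control flow of Figure 3.3 can be outlined in code. The sketch below is a condensed, hypothetical rendering: it assumes externally supplied info_gain, nb_fit, and nb_predict helpers (an entropy-based gain function and a naive Bayes fit/predict pair), and a max_depth cap standing in for the algorithm's termination test:

```python
from collections import defaultdict

def nrbc(records, labels, attrs, info_gain, nb_fit, nb_predict,
         depth=0, max_depth=5):
    """Outline of Figure 3.3: split on the highest-gain attribute,
    fit prior/conditional probabilities in each partition, and
    recurse only where examples are still misclassified."""
    # Steps 1-2: pick the attribute with maximum information gain.
    best = max(attrs, key=lambda a: info_gain(records, labels, a))
    # Step 3: split D into sub-datasets by the values of that attribute.
    parts = defaultdict(list)
    for rec, y in zip(records, labels):
        parts[rec[best]].append((rec, y))
    model = {"attr": best, "children": {}}
    for value, part in parts.items():
        recs = [r for r, _ in part]
        ys = [y for _, y in part]
        # Steps 4-5: prior P(Cj) and conditionals P(Aij|Cj), then classify.
        sub = nb_fit(recs, ys)
        misclassified = any(nb_predict(sub, r) != y for r, y in part)
        # Steps 6-7: recurse on sub-datasets that are not yet correct.
        if misclassified and depth < max_depth and len(set(ys)) > 1:
            rest = [a for a in attrs if a != best]
            sub = nrbc(recs, ys, rest, info_gain, nb_fit, nb_predict,
                       depth + 1, max_depth)
        model["children"][value] = sub
    return model
```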

3.5 Process

The processing of the two-tier architecture is shown in this section. The Novel Rule-Based algorithm is shown in Algorithm 1. It uses the sequential covering approach to extract rules from the data, extracting the rules for one class at a time. Let (y1, y2, ..., yn) be the classes ordered by frequency, where y1 is the least frequent class and yn is the most frequent class.

During the i'th iteration, instances that belong to yi are labeled as positive examples, while those that belong to the other classes are labeled as negative examples. The procedure for the Novel Rule-Based classification algorithm is shown in figure 3.3.

A rule is desirable if it covers most of the positive examples and none of the negative examples. The rule-based learning algorithm is based on the RIPPER algorithm [58], introduced by Cohen and considered one of the most commonly used rule-based algorithms in practice; it is given in figure 3.4.

The Learn_One_Rule() procedure, shown in figure 3.5, is the key function of the rule-based algorithm. It generates the best rule for the current class, given the current set of uncertain training tuples. Learn_One_Rule() comprises two phases:

Growing rules, and

Pruning rules.

The first phase, growing rules, is described in more detail below; the second phase, pruning rules, is similar to a regular rule-based classifier [59] and will not be elaborated. After a rule is generated, all the positive and negative examples covered by the rule are eliminated. The rule is then added to the rule set as long as it does not violate the stopping condition, which is based on the minimum description length (MDL) principle. The algorithm also performs additional optimization steps to determine whether some of the existing rules in the rule set can be replaced by better alternatives.
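A schematic of this sequential-covering loop, under the simplifying assumptions that a rule is a predicate over records and that learn_one_rule returns None once the stopping condition fires:

```python
def sequential_covering(dataset, target_class, learn_one_rule, max_rules=50):
    """dataset: list of (record, class) pairs.  learn_one_rule returns a
    predicate over records, or None once the stopping condition
    (e.g. the MDL-based test) rejects further rules."""
    rules = []
    remaining = list(dataset)
    while any(cls == target_class for _, cls in remaining) and len(rules) < max_rules:
        rule = learn_one_rule(remaining, target_class)
        if rule is None:
            break
        rules.append(rule)
        # Eliminate every positive and negative example the rule covers.
        remaining = [(rec, cls) for rec, cls in remaining if not rule(rec)]
    return rules
```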

The process of growing rules, Grow(), is shown in figure 3.6. The basic strategy is as follows:

1. It starts with an initial rule: { } -> y, where the left-hand side is an empty set and the right-hand side contains the target class. This rule has poor quality because it covers all the examples in the training set. New conjuncts are subsequently added to improve the rule's quality.

2. The probabilistic information gain is used as a measure for choosing an antecedent of the rule (steps 3-4). The details of how to compute probabilistic information gain for uncertain data are shown in the next section.

3. If an instance is covered by the rule, the function splitUncertain() is invoked (steps 5-9). splitUncertain() returns the part of the instance that is covered by the rule. That part is then removed from the dataset, and the rule-growing process continues until either all the data are covered or all the attributes have been used as antecedents.

Function splitUncertain() is shown in figure 3.7. As the data is uncertain, a rule can partly cover an instance. splitUncertain() computes what proportion of an instance is covered by a rule, based on the uncertain attribute interval and the probabilistic distribution function. The algorithm employs a general-to-specific strategy to grow a rule, using the probabilistic information gain measure to choose the best conjunct to add to the rule antecedent. The new rule is then pruned based on its performance on the validation set. The following metric is used for rule pruning.

The probabilistic prune for a rule R is

ProbPrune(R,p,n)= (PC(p) − PC(n))/(PC(p) + PC(n))

Here PC(p) and PC(n) are the probabilistic cardinalities of the positive and negative instances covered by the rule. This metric is monotonically related to the rule's accuracy on the validation set. If the metric improves after pruning, the conjunct is removed. Like RIPPER, the algorithm starts with the most recently added conjunct when considering pruning. Conjuncts are pruned one at a time as long as this results in an improvement.
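A direct transcription of the pruning metric, with the probabilistic cardinalities passed in as precomputed real values:

```python
def prob_prune(pc_pos, pc_neg):
    """ProbPrune(R, p, n) = (PC(p) - PC(n)) / (PC(p) + PC(n)), where
    pc_pos and pc_neg are the probabilistic cardinalities (real-valued
    counts) of positive and negative instances covered by the rule."""
    total = pc_pos + pc_neg
    return (pc_pos - pc_neg) / total if total else 0.0

# A conjunct is dropped only if removing it improves the metric, e.g.:
# if prob_prune(pos_without, neg_without) > prob_prune(pos_with, neg_with):
#     remove the most recently added conjunct and repeat
```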

For uncertain data, a rule may partly cover instances; therefore, the numbers of positive and negative instances covered by a rule are no longer integers but real values. Like a traditional rule-based classifier, NRBC also generates a default rule, which is applied to instances that do not match any rule in the rule set. This default rule has an accuracy of around 91%.

To classify a test sample ei in the dataset, the algorithm estimates the likelihood that ei is in each class. The product of the conditional probabilities and the prior probability for each attribute gives the probability that ei is in a class. The posterior probability P(Cj | ei) is then found for each class, and the class with the highest posterior probability is assigned to the given data.

The algorithm continues this process until all the examples of the sub-datasets or sub-sub-datasets are correctly classified. Once that happens, the algorithm terminates, and the prior and conditional probabilities of each sub/sub-sub-dataset are preserved for future classification of unseen examples.

3.5.3 Applying the Model

The work done in the first layer to achieve intrusion detection is reading the training data, selecting the feature subset, and training the new rule-based method for classification. The KDDCUP99 [53] DARPA dataset from MIT, USA, was chosen for evaluating the model. This dataset is divided into two parts: one part of the samples is the training set and the other is the test set. From the training dataset, the feature subset is selected using the info-gain statistical measure.

3.5.3.1 Feature Selection

Feature selection from the available data is vital for the effectiveness of the methods employed. For good detection performance, a set of features should be chosen whose values in normal audit records differ significantly from the values in intrusion records. Data mining algorithms work more effectively if domain information about higher-priority attributes is available. Here the entropy (information gain) method has been used for feature selection; selecting an attribute follows a recursive divide-and-conquer strategy. The KDDCUP99 dataset features are listed in table 3.1. The dataset has 41 features and one class-label attribute; the attribute names along with their data types are shown in table 3.1. The procedure for feature selection is explained below.

Given training data D = {t1,…,tn}, where ti = {ti1,…,tin}, the training data contains attributes {A1, A2,…,An}, and each attribute Ai contains attribute values {Ai1, Ai2,…,Ain}. The attribute values can be discrete or continuous. The training data D contains a set of classes C = {C1, C2,…,Ck}, and each tuple in D belongs to a particular class Cj. The algorithm calculates the information gain for each attribute {A1, A2,…,An} in D, and the attributes are sorted in descending order of information gain. The formula for information gain based on entropy (the information of an attribute) is given by eq. 3.1:

Info(D) = − Σk pk log2(pk) (3.1)

where pk is the probability that an arbitrary tuple in D belongs to class Ck.

The algorithm chooses the best attribute Ai among {A1, A2,…,An} in the training data D, the one with the highest information gain value, and splits D into sub-datasets {D1, D2,…,Dn} depending on the values of the chosen attribute Ai.

Using equation 3.1, the attributes along with their entropy are listed in table 3.2. The flowchart for feature selection, following the procedure explained above, is shown in figure 3.2. Most previous methods for feature selection emphasized only reducing the high dimensionality of the feature space. But in cases where many features are highly redundant with each other, other means must be used, for example more complex dependence models such as Bayesian network classifiers. In this work, information-gain and divergence-based feature selection methods have been used with a statistical machine rule-based learning method, without relying on more complex dependence models.

This feature selection method strives to reduce redundancy between features while maintaining information gain when selecting appropriate features for classification. Entropy-based selection is a greedy feature selection method built on the conventional information gain commonly used in feature selection for classification models. Moreover, this feature selection method sometimes yields larger improvements for conventional machine learning algorithms than for support vector machines, which are known to give the best classification accuracy. The features with the highest entropy have been selected; the selected features are shown in table 3.3.
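A minimal sketch of this entropy-based selection, assuming categorical attribute values and a hypothetical top-k cutoff (the thesis selects by highest gain but does not fix k here):

```python
import math
from collections import Counter, defaultdict

def info(labels):
    """Eq. 3.1: Info(D) = -sum_k p_k log2(p_k)."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain(records, labels, attr):
    """Information gain of one attribute: Info(D) minus the weighted
    entropy of the sub-datasets induced by the attribute's values."""
    parts = defaultdict(list)
    for rec, y in zip(records, labels):
        parts[rec[attr]].append(y)
    n = len(labels)
    return info(labels) - sum((len(ys) / n) * info(ys) for ys in parts.values())

def select_features(records, labels, attrs, k=10):
    """Sort attributes in descending order of gain and keep the top k."""
    ranked = sorted(attrs, key=lambda a: gain(records, labels, a), reverse=True)
    return ranked[:k]
```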

3.5.3.2 Pruning Unnecessary Conditions

If conditions of a rule are inconsequential to the outcome, discard them, thus simplifying the rule (and improving efficiency). This is accomplished by showing that the outcome is independent of the given condition. Events A and B are independent if the probability of event B does not change given that event A occurs. Using Bayes' rule:

P(B|A) = P(B)

This states that the probability of event B given that event A occurs is equal to the probability of event B by itself. If this holds, then event A does not affect whether event B occurs; if A is a condition and B is a result, A can be discarded without affecting the rule.
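In code, the independence test can be approximated by comparing empirical frequencies; the tolerance below is an illustrative assumption, since the text does not specify how close P(B|A) and P(B) must be:

```python
def condition_is_redundant(dataset, cond_a, result_b, tol=0.01):
    """Approximate the test P(B|A) = P(B) on data: if the outcome
    frequency is unchanged among records satisfying the condition,
    the condition can be discarded from the rule."""
    n = len(dataset)
    p_b = sum(1 for r in dataset if result_b(r)) / n
    covered = [r for r in dataset if cond_a(r)]
    if not covered:
        return False  # condition never fires; nothing to compare
    p_b_given_a = sum(1 for r in covered if result_b(r)) / len(covered)
    return abs(p_b_given_a - p_b) <= tol
```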

3.5.3.3 Pruning Unnecessary Rules

If two or more rules share the same end result, you may be able to replace them with a rule that fires when no other rule fires:

If (no other rule fires) then (execute these common actions)

Figure 3.2 Flow chart for Feature Selection using the Entropy Method: read the training samples, calculate the entropy for each attribute, select the attribute with maximum information gain, partition the dataset into sub-datasets D1, D2, …, Dn, and repeat until the terminating conditions are met (then stop).

If there is more than one such group of rules, replace only one group; which one is determined by a heuristic tie-breaker. Two such tie-breakers follow:

Replace the larger of the two groups. If group A has six rules that share a common result and group B has only five, replacing the larger group A will eliminate more rules and simplify the rule base the most.

Replace the group with the highest average number of rule conditions. While more rules may remain, the rules that remain will be simpler, as they have fewer conditions.

3.5.3.4 Rule Prediction

Once the rules are learned from a dataset, they can be used to predict the class types of unseen data. As in a traditional rule classifier, each rule of Rule_set has the form "IF Conditions THEN Class = Ci".

Because each instance Ii can be covered by several rules, a vector can be generated for each instance, P(Ii, C) = (P(Ii, C1), P(Ii, C2), ..., P(Ii, Cn))T, in which P(Ii, Cj) denotes the probability of the instance being in class Cj. This is called the Class Probability Vector (CPV). As an uncertain data instance can be partly covered by a rule, the degree to which an instance Ii is covered by a rule Rj is denoted by P(Ii, Rj).

When P(Ii, Rj) = 1, Rj fully covers instance Ii; when P(Ii, Rj) = 0, Rj does not cover Ii; and when 0 < P(Ii, Rj) < 1, Rj partially covers Ii.

An uncertain instance may be covered or partially covered by more than one rule. We allow a test instance to trigger all relevant rules. Let w(Ii, Rk) denote the weight of an instance Ii covered by the kth rule Rk. The weight of an instance Ii covered by different rules is as follows:

w(Ii, R1) = Ii.w ∗ P(Ii, R1)

w(Ii, R2) = (Ii.w − w(Ii, R1)) ∗ P(Ii, R2)

...

w(Ii, Rn) = (Ii.w − Σ_{k=1}^{n−1} w(Ii, Rk)) ∗ P(Ii, Rn)

For the first rule R1 in the rule set, w(Ii, R1) is the weight of the instance, Ii.w, times the degree to which the instance is covered by the rule, P(Ii, R1). For the second rule R2, w(Ii, R2) is the remaining probabilistic cardinality of instance Ii, which is (Ii.w − w(Ii, R1)), times the rule coverage P(Ii, R2). Similarly, w(Ii, Rn) is the remaining probabilistic cardinality of instance Ii, which is Ii.w − Σ_{k=1}^{n−1} w(Ii, Rk), times P(Ii, Rn). Suppose an instance Ii is covered by m rules; then its class probability vector P(Ii, C) is computed as follows:

P(Ii, C) = Σ_{k=1}^{m} w(Ii, Rk) ∗ P(Rk, C), where P(Rk, C) is a vector

P(Rk, C) = (P(Rk, C1), P(Rk, C2), ..., P(Rk, Cn))T, denoting the class distribution of the instances covered by rule Rk. P(Rk, Ci) is computed as the fraction of the probabilistic cardinality of instances in class Ci covered by the rule over the overall probabilistic cardinality of instances covered by the rule. After computing the CPV for instance Ii, the instance is predicted to be of the class Cj that has the largest probability in the class probability vector. This prediction procedure differs from a traditional rule-based classifier.

When predicting the class type of an instance, a traditional rule-based classifier such as RIPPER usually predicts with the first rule in the rule set that covers the instance. As an uncertain data instance can be fully or partially covered by multiple rules, the first rule in the rule set may not be the rule that covers it best. The new rule-based classifier uses all the relevant rules to compute the probability of the instance being in each class, and predicts the class with the highest probability.
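Putting the weighting scheme and the CPV together, a sketch of prediction for one instance (the coverages and per-rule class distributions are assumed to be precomputed):

```python
def class_probability_vector(instance_weight, coverages, rule_class_dists):
    """coverages[k] is P(Ii, Rk), the degree rule k covers the instance;
    rule_class_dists[k] is P(Rk, C), rule k's class distribution.
    Weights follow the text: w(Ii,Rk) = (Ii.w - earlier weights) * P(Ii,Rk)."""
    n_classes = len(rule_class_dists[0])
    cpv = [0.0] * n_classes
    used = 0.0
    for p_cover, dist in zip(coverages, rule_class_dists):
        w = (instance_weight - used) * p_cover
        used += w
        for j in range(n_classes):
            cpv[j] += w * dist[j]
    return cpv  # predict the class with the largest entry

# Two rules partially cover an instance of weight 1.0:
cpv = class_probability_vector(1.0, [0.6, 0.5], [[0.9, 0.1], [0.2, 0.8]])
print(cpv, "-> class", cpv.index(max(cpv)))  # [0.58, 0.22] -> class 0
```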

3.6 Application of Two-tier Architecture Results

Using the above procedures the rule set is generated for each class type. The DARPA KDDCUP99 dataset has been used for evaluation of the RBC model. The rules for each class are given below.

3.6.1 U2R Class Rule-set

(service = telnet) and (src_bytes >= 135) THEN class = u2r

(service = ftp_data) and (dst_bytes >= 2072) and (dst_bytes <= 5928) THEN class = u2r

(service = telnet) and (dst_bytes >= 183) and (dst_bytes <= 233) THEN class = u2r

3.6.2 R2L Class Rule-set

(service = ftp_data) and (flag = SF) THEN class = r2l

(service = telnet) and (src_bytes >= 112) THEN class = r2l

(service = ftp) and (src_bytes <= 119) and (src_bytes >= 36) THEN class = r2l

(count <= 1) and (service = login) THEN class = r2l

(count <= 1) and (dst_bytes >= 2551) and (src_bytes <= 51) THEN class = r2l

3.6.3 Probe Class Rule-set

(count >= 309) and (src_bytes <= 6) THEN class = probe

(service = eco_i) and (src_bytes <= 20) THEN class = probe

(flag = REJ) and (count >= 3) THEN class = probe

(flag = SH) THEN class = probe

(src_bytes >= 1) and (src_bytes <= 6) and (count >= 3) THEN class = probe

(flag = RSTR) and (src_bytes <= 0) THEN class = probe

3.6.4 Normal Class Rule-set

(dst_bytes >= 32) and (src_bytes <= 12804) THEN class = normal

(count <= 1) and (service = smtp) THEN class = normal

(count <= 2) and (src_bytes <= 1476) and (src_bytes >= 6) THEN class = normal

(flag = REJ) THEN class = normal
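Such mined rules translate directly into executable predicates. The sketch below encodes a few of the rules above with first-match dispatch; this is a simplification, since NRBC actually combines all triggered rules through the CPV, and the default class here is an assumption:

```python
# Each entry pairs a predicate over a KDDCUP99 connection record
# with the class it predicts.
RULES = [
    (lambda r: r["service"] == "telnet" and r["src_bytes"] >= 135, "u2r"),
    (lambda r: r["service"] == "ftp_data" and r["flag"] == "SF", "r2l"),
    (lambda r: r["flag"] == "SH", "probe"),
    (lambda r: r["count"] <= 1 and r["service"] == "smtp", "normal"),
]

def classify(record, default="normal"):  # default class is an assumption
    for matches, cls in RULES:
        if matches(record):
            return cls
    return default  # the default rule catches unmatched records

print(classify({"service": "telnet", "src_bytes": 200,
                "flag": "S0", "count": 3}))  # -> u2r
```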

3.7 Performance Evaluation of NRBC

The performance results for the RBC model are shown in Table 3.4. The model is tested on 22,259 test records of the DARPA KDDCUP99 dataset. Ten-fold cross-validation is used to classify the records.

3.7.1 Purpose of Cross-Validation

Suppose we have a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible.

If we then take an independent sample of validation data from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. This is called overfitting, and it is particularly likely to happen when the training data set is small or the number of model parameters is large. Cross-validation is a way to predict the fit of a model to a hypothetical validation set when an explicit validation set is not available.

3.7.2 Cross-validation

Cross-validation is sometimes called rotation estimation. It is a technique for assessing how the results of a statistical analysis will generalize to an independent data set. It is mainly used in settings where the goal is prediction, and one wants to estimate how accurately a predictive model will perform in practice.

One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds. Here the 10-fold cross-validation method has been implemented; the algorithm for 10-fold cross-validation is shown in figure 3.8.

In the experiment, ten-fold cross-validation has been used. Data is split into 10 approximately equal partitions; each one is used in turn for testing while the rest are used for training. That is, 9/10 of the data is used for training and 1/10 for testing.

The whole procedure is repeated 10 times, and the overall accuracy rate is computed as the average of the accuracy rates on each partition. The procedure for cross-validation is shown in figure 3.8. The accuracy of the RBC is 99.7933%, that is, 22,213 records correctly classified out of 22,259; the error rate is 0.2067%, with only 46 records incorrectly classified out of the same total.

Cross-Validation Procedure

Divide all examples into N disjoint subsets, E = E1, E2, ..., EN.

For each i = 1, ..., N do:

    Test set = Ei

    Training set = E − Ei

    Compute the decision tree using the Training set

    Determine the performance accuracy Pi using the Test set

Compute the N-fold cross-validation estimate of performance:

    P = (P1 + P2 + ... + PN) / N

Figure 3.8 Cross-validation procedure
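The same procedure as a runnable sketch (the shuffle seed and the train/accuracy callbacks are our assumptions):

```python
import random

def cross_validate(examples, train_fn, accuracy_fn, n_folds=10, seed=0):
    """Figure 3.8 as code: split E into N disjoint subsets, train on
    E - Ei, test on Ei, and average the per-fold accuracies."""
    data = examples[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::n_folds] for i in range(n_folds)]
    scores = []
    for i in range(n_folds):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        model = train_fn(train)
        scores.append(accuracy_fn(model, test))
    return sum(scores) / n_folds
```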

3.7.3 Receiver Operating Characteristic (ROC) Graphs

The ROC approach to evaluating the predictive ability of classifiers provides an intuitive and convenient way of dealing with asymmetric costs of the two types of errors. ROCs are plotted in coordinates spanned by the false-positive and true-positive classification rates.

True positives (TP) are the positive tuples that were correctly labeled by the classifier; true negatives (TN) are the negative tuples that were correctly labeled by the classifier; false positives (FP) are the negative tuples that were incorrectly labeled as positive; and false negatives (FN) are the positive tuples that were incorrectly labeled as negative.

There are various parameters to evaluate the performance of the classification model. Some of the parameters are defined below.

True positive rate (TPR) = TP / (TP + FN)

False positive rate (FPR) = FP / (FP + TN)

Confusion Matrix

Another evaluation device is the confusion matrix, more commonly named a contingency table. In DARPA KDDCUP99 there are five class types (normal plus four attack types), so a 5x5 confusion matrix is needed; in general the matrix can be arbitrarily large. The number of correctly classified instances is the sum of the diagonal entries of the matrix; everything else is incorrectly classified.

The True Positive (TP) rate (TPR) is the proportion of examples classified as class x among all examples that truly have class x, i.e. how much of the class was captured. It is equivalent to recall or sensitivity.

The False Positive (FP) rate (FPR) is the proportion of examples classified as class x that belong to a different class, among all examples not of class x. In the matrix, this is the column sum of class x minus the diagonal element, divided by the row sums of all other classes.

The Precision for a class is the number of true positives (i.e. the number of items correctly labeled as belonging to the positive class) divided by the total number of elements labeled as belonging to the positive class.

The F-Measure combines the true positive rate (recall) and precision into a single utility function, defined as their evenly weighted harmonic mean:

F = 2 · (Precision · Recall) / (Precision + Recall)
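These per-class measures can be derived mechanically from a confusion matrix; the sketch below reproduces, for example, the U2R row of Table 3.5 from the Table 3.4 counts shown further on:

```python
def per_class_metrics(cm, classes):
    """Derive TPR (recall), FPR, precision and F-measure for each class
    from a square confusion matrix cm[actual][predicted]."""
    total = sum(sum(row) for row in cm)
    out = {}
    for i, cls in enumerate(classes):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp
        fp = sum(cm[r][i] for r in range(len(cm))) - tp
        tn = total - tp - fn - fp
        tpr = tp / (tp + fn) if tp + fn else 0.0
        fpr = fp / (fp + tn) if fp + tn else 0.0
        prec = tp / (tp + fp) if tp + fp else 0.0
        f = 2 * prec * tpr / (prec + tpr) if prec + tpr else 0.0
        out[cls] = {"TPR": tpr, "FPR": fpr, "Precision": prec, "F": f}
    return out

# Check against Tables 3.4 and 3.5:
cm = [[13437, 4, 0, 0, 0], [3, 7759, 0, 2, 3], [5, 7, 858, 0, 0],
      [0, 6, 0, 116, 6], [0, 5, 2, 3, 43]]
m = per_class_metrics(cm, ["normal", "dos", "probe", "r2l", "u2r"])
print(round(m["u2r"]["TPR"], 3), round(m["u2r"]["Precision"], 3))  # 0.811 0.827
```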

Using all of the above, the results of the RBC model are shown in Tables 3.4 and 3.5.

Table 3.4 RBC Classification Results in Confusion Matrix

Class    Normal   DoS    Probe   R2L   U2R
Normal   13437    4      0       0     0
DoS      3        7759   0       2     3
Probe    5        7      858     0     0
R2L      0        6      0       116   6
U2R      0        5      2       3     43

This model has been applied to the KDDCUP99 dataset with a sample size of 22,259 records. The steps of the framework have been followed and applied; intermediate results are not shown. The final accuracy results for the new rule-based model are summarized in the confusion-matrix tables 3.4 and 3.5, and the ROC curves are shown in figures 3.9 and 3.10.

Table 3.5 RBC Classification Detailed Accuracy by Class

Class    TP Rate   FP Rate   Precision   Recall   F-Measure   ROC Area
Normal   1         0.001     0.999       1        1           1
DoS      0.999     0.002     0.997       0.999    0.998       0.999
Probe    0.986     0         0.998       0.986    0.992       0.992
R2L      0.906     0         0.959       0.906    0.932       0.986
U2R      0.811     0         0.827       0.811    0.819       0.933

These results show that the accuracy for normal and DoS records is around 0.99, close to one, whereas for the Probe, R2L, and U2R classes it drops to 0.986, 0.906, and 0.811 respectively, because those classes have only a few records. The area under the ROC curve is one for an ideal model that predicts with 100% accuracy. In the confusion matrix of table 3.4, the diagonal entries indicate the number of records correctly predicted for each class type, and the remaining entries are wrongly predicted records.

Figure 3.9 ROC Curve for R2L Class

Figure 3.10 ROC Curve for U2R Class

Summary

The model performance is around 99% with a reduced set of dataset features. The reduction in features reduces model learning time, prediction execution time, and space requirements.

In this model, the proposed novel rule-based algorithm is used for classifying and predicting KDDCUP99 dataset records. The proposed new approaches are deriving optimal rules from data, pruning and optimizing rules, and class prediction for data records. This model follows the new paradigm of directly mining the KDDCUP dataset.

The scope of future work includes developing uncertain data mining techniques for various applications, including sequential pattern mining, association mining, spatial data mining, and web mining, where data is usually uncertain.

References


