Text Classification Based on CosFuzzy Logic


Krishna Mohan

Associate Professor

Department of CSE

UCEK, JNTUKakinada

[email protected]

Dr.MHM Krishna Prasad

Associate Professor

Department of CSE

UCEV, JNTUKakinada

[email protected]

ABSTRACT

Evaluating objective-type examinations is straightforward for a computer, but evaluating descriptive (essay-type) answers is far more difficult, and little significant research has addressed it. In this paper we propose a new solution to this problem based on text classification using a new fuzzy approach named CosFuzzy logic.

Document clustering is a useful technique that organizes a large quantity of unordered text documents into a small number of meaningful and coherent clusters, thereby providing a basis for intuitive and informative navigation and browsing mechanisms. Partitional clustering algorithms have been recognized as more suitable than hierarchical clustering schemes for processing large datasets. A wide variety of distance functions and similarity measures have been used for clustering, such as squared Euclidean distance, cosine similarity, and relative entropy.

A fuzzy-based feature clustering method was previously proposed in which a Gaussian distribution is used as the fuzzy membership function. Clustering the data into four known classes, we use the cosine similarity function together with fuzzy logic to calculate the similarity between two documents. Experimental results show that our CosFuzzy logic obtains better results.

General Terms

Feature Clustering, cosine similarity, Split distribution, fuzzy clustering.

Keywords

Dimensionality reduction, Skewness, feature extraction, fuzzy clustering, split normal distribution.

INTRODUCTION

Consider the problem of automatically classifying text documents for online descriptive-answer evaluation, that is, deciding whether a submitted answer is relevant to the expected answer or not. This problem is of great practical importance given the massive volume of online text available through the World Wide Web and online examination portals.

Due to the rapid development of Internet technology, the increasing volume of digital textual data has become more tedious to manage and mine for knowledge, so text classification has gained significant attention. Text classification poses several challenges, and high dimensionality is one of the biggest: each document (data point) contains only a very small subset of the features and may carry multiple labels at the same time. Feature clustering is a powerful method to reduce the dimensionality of feature vectors for text classification, and many researchers have worked on feature clustering for efficient text classification. Recently, a fuzzy-based feature clustering method was proposed in which a Gaussian distribution is used as the fuzzy membership function for clustering. In this paper we propose a new method for clustering the documents.

The process of dividing data elements into classes or clusters is called data clustering. Items in the same cluster should be as similar as possible, and items in different clusters should be as dissimilar as possible. Depending on the nature of the data and the purpose for which clustering is being used, different measures of similarity may be used to place items into classes, where the similarity measure controls how the clusters are formed. Some examples of measures that can be used in clustering include distance, connectivity, and intensity.

In hard clustering, data is divided into distinct clusters, where each data element belongs to exactly one cluster. In fuzzy clustering (also referred to as soft clustering), data elements can belong to more than one cluster, and associated with each element is a set of membership levels. These indicate the strength of the association between that data element and a particular cluster. Fuzzy clustering is a process of assigning these membership levels, and then using them to assign data elements to one or more clusters.

One of the most widely used fuzzy clustering algorithms is the Fuzzy C-Means (FCM) algorithm (Bezdek 1981). The FCM algorithm attempts to partition a finite collection of n elements X = {x1, x2, . . . , xn} into a collection of c fuzzy clusters with respect to some given criterion. Given a finite set of data, the algorithm returns a list of c cluster centres C = {c1, c2, . . . , cc} and a partition matrix U = [uij], where each element uij tells the degree to which element xi belongs to cluster cj. Like the k-means algorithm, FCM aims to minimize an objective function. The standard function is

J_m = \sum_{i=1}^{n} \sum_{j=1}^{c} u_{ij}^{m} \lVert x_i - c_j \rVert^{2},

which differs from the k-means objective function by the addition of the membership values uij and the fuzzifier m. The fuzzifier m determines the level of cluster fuzziness. A large m results in smaller memberships uij and hence fuzzier clusters. In the limit m → 1, the memberships uij converge to 0 or 1, which implies a crisp partitioning. In the absence of experimentation or domain knowledge, m is commonly set to 2. The basic FCM algorithm is given n data points (x1, . . . , xn) to be clustered, a number of clusters c with (c1, . . . , cc) the centres of the clusters, and m the level of cluster fuzziness.
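To illustrate the iteration described above, the following is a minimal NumPy sketch of standard FCM (centres as membership-weighted means, memberships from inverse distance ratios). The random initialization, tolerance, and toy data are illustrative choices, not taken from the paper.

import numpy as np

def fuzzy_c_means(X, c, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C-Means: returns (cluster centres, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U = U / U.sum(axis=1, keepdims=True)                  # memberships sum to 1 per point
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]    # membership-weighted means
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        new_U = 1.0 / (dist ** (2.0 / (m - 1.0)))
        new_U = new_U / new_U.sum(axis=1, keepdims=True)  # u_ij = 1 / sum_k (d_ij/d_ik)^(2/(m-1))
        if np.abs(new_U - U).max() < tol:
            U = new_U
            break
        U = new_U
    return centres, U

X = np.vstack([np.random.default_rng(0).normal(0, 1, (20, 2)),
               np.random.default_rng(1).normal(4, 1, (20, 2))])
centres, U = fuzzy_c_means(X, c=2)
print(centres)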

Text classification differs from conventional classification approaches in the construction of the text documents. The dimensionality of text data is very large in comparison to other forms of data sets, and each document may contain only a few of the features from the entire feature pool. Recently, text data processing techniques have attracted more and more attention. The curse of dimensionality creates a serious problem for classification algorithms. To alleviate this problem, feature reduction approaches are applied before document classification tasks are performed. The main purpose of feature reduction is to reduce the classifier's computation load and to increase data consistency. There are two main techniques for feature reduction: feature selection and feature extraction. Feature selection methods take a subset of the features, and the classifier uses only this subset instead of all the original features to perform the text classification task. Feature extraction methods convert the representation of the original documents to a new representation based on a smaller set of synthesized features. A well-known feature selection approach is based on the Information Gain [8] measure, defined as the amount of decreased uncertainty given a piece of information. However, there are some problems associated with feature selection based methods: since only a subset of the words is used for the classification of the text data, useful information may be ignored.

Feature extraction techniques translate the representation of the input documents into a new representation based on a smaller set of synthesized features. Feature clustering [1], [2], [3], [10] is one of the powerful techniques for feature extraction. Feature clustering groups words with a high degree of pairwise semantic relatedness into clusters, and each word cluster, containing the grouped features, is treated as a single feature. In this way, the dimensionality of the features can be drastically reduced. Feature clustering techniques were suggested by Baker and McCallum [1], derived from the 'distributional clustering' idea of Pereira et al. [4]. An Information Bottleneck approach was proposed by Tishby et al. [2], [7], who showed that feature clustering approaches are more effective than feature selection ones. A Divisive Clustering (DC) method was proposed by Dhillon et al. [3], which is an information-theoretic feature clustering approach and is more effective than other feature clustering methods. In these methods, each new feature is generated by combining a subset of the original words. A word is assigned to a group or subset if the similarity of the word to that subset is greater than its similarity to the other subsets, even if the difference is very small. All the feature selection and extraction methods mentioned above require the number of new features to be specified in advance by the user. Later, Jung-Yi Jiang, Ren-Jia Liou, and Shie-Jue Lee proposed a fuzzy similarity-based self-constructing feature clustering algorithm [10], an incremental feature clustering approach that reduces the number of features for the text classification task. Words in the feature vector that are similar to each other are grouped into the same cluster. Each cluster is characterized by a fuzzy membership function with a statistical mean and deviation. If a word does not match any existing cluster, a new cluster is built for this word. The fuzzy membership with mean and deviation represents the similarity between a word and a cluster. When all the words have been fed in, a required number of clusters has been formed automatically. Each cluster is then taken as a reduced feature, and the extracted feature corresponding to a cluster is a weighted combination of the words contained in the cluster.

In this paper, we propose a new approach to classify text based on CosFuzzy logic. Sometimes the data distribution may be skewed, so using the usual symmetric exponential (Gaussian) membership may give weaker results; we therefore use a split Gaussian distribution function as the fuzzy membership function. With this algorithm, the derived membership functions match closely with, and properly describe, the real distribution of the training data. Besides, the user need not specify the number of extracted features in advance, so trial and error for determining the appropriate number of extracted features can be avoided.

Experiments on real world data sets show that our method can run faster and obtain better extracted features than other methods.

The remainder of this paper is organized as follows: Section 2 gives a brief background about feature reduction and the document clustering. Section 3 presents the proposed Cosfuzzy based feature clustering technique. Experimental results are presented in Section 4. Finally, we conclude this work in Section 5.

2. BACKGROUND AND RELATED WORK:

To process documents, the bag-of-words model [5] is usually used. Let di be a document and let the set D = {d1, d2, . . . , dn} represent n documents. Let the word set W = {w1, w2, . . . , wm} be the feature set of the documents. Each document di, 1 ≤ i ≤ n, can be represented as di = <di1, di2, . . . , dim>, where each dij denotes the number of occurrences of wj in document di. The feature reduction task is to find a new word set W' = {w'1, w'2, . . . , w'k}, k << m, such that W and W' work equally well for all the desired properties with D. After feature reduction, each document di is converted to a new representation d'i = <d'i1, d'i2, . . . , d'ik> and the converted document set is D' = {d'1, d'2, . . . , d'n}. If k is very much smaller than m, the computation cost can be drastically reduced.
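As a hedged sketch of the bag-of-words representation described above, the following builds the n x m document-word count matrix with scikit-learn's CountVectorizer; the toy documents are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer

# Toy documents standing in for the answer set D (illustrative only).
docs = ["fuzzy clustering groups similar words",
        "cosine similarity compares document vectors",
        "fuzzy membership functions use mean and deviation"]

vectorizer = CountVectorizer()
D = vectorizer.fit_transform(docs)        # D[i, j] = occurrences of word w_j in document d_i
W = vectorizer.get_feature_names_out()    # the word set W = {w_1, ..., w_m}
print(D.toarray())
print(W)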

2.1 High Dimensionality and the Dimension Reduction:

In text classification, the dimensionality of the feature vector is usually huge. Moreover, there is the problem of the curse of dimensionality: the large collection of features is very expensive in terms of execution time and storage requirements. This is considered one of the problems of the Vector Space Model (VSM), where each document is represented as a vector of n-dimensional data. Here, n represents the total number of features of the document collection. This feature set is huge and high dimensional.

There are two popular techniques for feature reduction: feature selection and feature extraction. In feature selection methods, a subset of the original word set is selected to form the new feature set, for example by means of Information Gain, and this subset is then used for the text classification task. In feature extraction methods, the original feature set is converted into a different, reduced feature set by a projection process. So the number of features is reduced and the overall system performance is improved.

Feature extraction approaches are more effective than feature selection techniques but are more computationally expensive. Therefore, the development of scalable and efficient feature extraction algorithms is highly demanded for dealing with high-dimensional document feature sets. Both feature reduction approaches are applied before document classification tasks are performed.

2.2 Document Clustering:

Document clustering is typically carried out with a partitional method such as the traditional k-means algorithm. The objective of k-means is to minimize the Euclidean distance between the objects of a cluster and that cluster's centroid:

\min \sum_{j=1}^{k} \sum_{d_i \in S_j} \lVert d_i - c_j \rVert^{2},

where Sj denotes the jth cluster and cj its centroid.

However, for data in a sparse and high-dimensional space, such as that in document clustering, cosine similarity is more widely used. It is also a popular similarity score in text mining and information retrieval. In particular, the similarity of two document vectors di and dj, Sim(di, dj), is defined as the cosine of the angle between them. For unit vectors, this equals their inner product:

Sim(d_i, d_j) = \cos(d_i, d_j) = d_i^{T} d_j.

The cosine measure is used in a variant of k-means called spherical k-means [3]. While k-means aims to minimize the Euclidean distance, spherical k-means intends to maximize the cosine similarity between the documents in a cluster and that cluster's centroid:

\max \sum_{j=1}^{k} \sum_{d_i \in S_j} \cos(d_i, c_j).

The major difference between Euclidean distance and cosine similarity, and therefore between k-means and spherical k-means, is that the former focuses on vector magnitudes, while the latter emphasizes vector directions. Besides its direct application in spherical k-means, the cosine of document vectors is also widely used in many other document clustering methods as a core similarity measure.
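The following is a minimal sketch of the cosine similarity between two document term-frequency vectors using NumPy; the vectors are toy values for illustration.

import numpy as np

def cosine_sim(di, dj):
    """Cosine of the angle between two document vectors."""
    di, dj = np.asarray(di, float), np.asarray(dj, float)
    return float(di @ dj / (np.linalg.norm(di) * np.linalg.norm(dj) + 1e-12))

# Term-frequency vectors over the same word set W (toy values).
d1 = [2, 0, 1, 3]
d2 = [1, 1, 0, 2]
print(cosine_sim(d1, d2))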

Feature clustering is an efficient approach for feature reduction [1], [2], which groups all features into some clusters, where features in a cluster are similar to each other. The feature clustering methods proposed in [1], [2], [3], and [11] are "hard" clustering methods, where each word of the original features belongs to exactly one word cluster; therefore each word contributes to the synthesis of only one new feature. Each new feature is obtained by summing up the words belonging to one cluster. Let A be the matrix consisting of all the original answer documents with m features and A' be the matrix consisting of the converted answer documents with the new k features. The new feature set F' = {f'1, f'2, . . . , f'k} corresponds to a partition {F1, F2, . . . , Fk} of the original feature set F, i.e., Ft ∩ Fq = Ø, where 1 ≤ q, t ≤ k and t ≠ q. Note that a cluster corresponds to an element of the partition. Then the tth feature value of the converted document d'i is calculated as follows, as a linear sum of the feature values in Ft:

d'_{it} = \sum_{w_j \in F_t} d_{ij}.        (1)

The divisive information-theoretic feature clustering (DC) algorithm, proposed by Dhillon et al. [3], calculates the distributions of words over classes, P(C|wj), 1 ≤ j ≤ m, where C = {c1, c2, . . . , cp}, and uses Kullback-Leibler divergence to measure the dissimilarity between two distributions.

The distribution of a word cluster Wt is calculated as follows:

P(C|W_t) = \sum_{w_j \in W_t} \frac{P(w_j)}{P(W_t)} \, P(C|w_j).        (2)

The goal of DC is to minimize the following objective function:

\sum_{t=1}^{k} \sum_{w_j \in W_t} P(w_j) \, KL\big(P(C|w_j) \,\|\, P(C|W_t)\big),        (3)

which takes the sum over all the k clusters, where k is specified by the user in advance.
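To make the hard-clustering conversion in (1) concrete, here is a small NumPy sketch that sums the columns of a document-word matrix according to a given partition of the words into clusters; the matrix and the partition are invented for illustration.

import numpy as np

# D[i, j] = occurrences of word w_j in document d_i (toy data, m = 5 words).
D = np.array([[2, 0, 1, 3, 0],
              [0, 1, 0, 1, 2]])

# A hard partition {F_1, F_2, F_3} of the word indices (illustrative).
partition = [[0, 3], [1, 4], [2]]

# d'_it = sum of d_ij over w_j in F_t, giving an n x k matrix.
D_reduced = np.column_stack([D[:, Ft].sum(axis=1) for Ft in partition])
print(D_reduced)    # [[5 0 1], [1 3 0]]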

Later, Jung-Yi Jiang, Ren-Jia Liou, and Shie-Jue Lee [10] proposed a fuzzy similarity-based self-constructing feature clustering algorithm with the following fuzzy membership function:

\mu_G(x) = \prod_{q=1}^{p} \exp\!\left(-\left(\frac{x_q - m_q}{\sigma_q}\right)^{2}\right),        (4)

where the mean m_q and deviation σ_q of cluster G are the sample mean and sample deviation of the qth component of the word patterns belonging to G.

3. OUR PROPOSED METHOD:

There are some difficulties with the clustering-based feature extraction methods described in the previous section. First, they must be given the value of k, the required number of clusters to which all the patterns have to be assigned, in advance. Second, the computation time depends on the number of iterations, which may be prohibitively high. Third, the existing methods use a symmetric Gaussian distribution as the membership function, while the distribution of some types of data may be skewed, which can give weaker results.

We propose an approach to deal with these difficulties. We develop an incremental word clustering procedure which uses a pre-specified threshold to determine the number of clusters automatically. Each word has a similarity degree, between 0 and 1, to each cluster. Based on these degrees, a word with a larger degree will contribute a bigger weight than one with a smaller degree when forming the new feature corresponding to the cluster.

3.1 Preprocessing:

Initially, each answer document is pre-processed to obtain the feature vector. Preprocessing consists of removing invalid terms, removing stop words, and word stemming, as shown in Fig. 1. The next step is to assign a class label to each document. The class labels indicate how relevant a document is: the label C1 is assigned if the answer document is close to the original answer document, while the labels C2, C3, and C4 are assigned to progressively less related documents. The closeness is decided based on the cosine similarity measure. Then each feature's frequency is calculated by the frequency calculator and fed to our fuzzy similarity method. Finally, the feature vector is converted into the reduced feature vector.

Fig. 1
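A hedged sketch of the preprocessing step described above (stop-word removal, invalid-term filtering, stemming, term-frequency counting) is given below; the stop-word list and the Porter stemmer are assumed tool choices from NLTK, not specified by the paper.

import re
from nltk.corpus import stopwords          # may require: nltk.download('stopwords')
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    """Tokenize, drop stop words and invalid terms, stem, and count term frequencies."""
    tokens = re.findall(r"[a-zA-Z]+", text.lower())      # keep alphabetic terms only
    terms = [stemmer.stem(t) for t in tokens
             if t not in stop_words and len(t) > 2]      # remove stop words / invalid terms
    freq = {}
    for t in terms:
        freq[t] = freq.get(t, 0) + 1
    return freq

print(preprocess("The fuzzy clustering of the answer documents is computed quickly."))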

3.2 Document similarity:

Evaluating the similarity between two documents is an operation which lies at the heart of most text and language processing tasks. Once we consider `document' to mean not a file or placeholder for information content as dictated by the everyday use of the term, but rather a delineable unit of information (be it a paragraph, an article, a sentence or even a word, in the case of textually represented information), then the fact that evaluating document similarity is an essential operation to such tasks should become intuitively clear by examination of a few common examples of text or language-processing tasks. We used the cosine similarity with fuzzy logic for online descriptive examination evaluation.

3.3 CosFuzzy method- a new approach:

Suppose we are given an original answer document d0 and a document set D of n documents d1, d2, . . . , dn, together with the feature vector W of m words w1, w2, . . . , wm. We assign class labels to these documents by comparing each of them with the original document d0 using the cosine similarity function

Sim(d_0, d_i) = \cos(d_0, d_i) = \frac{d_0 \cdot d_i}{\lVert d_0 \rVert \, \lVert d_i \rVert}.

All the documents whose cosine value is greater than 0.2 and at most 1.0 are subdivided into three classes: most similar, medium similar, and less similar. The answer documents whose cosine value is less than 0.2 are classified as irrelevant documents.
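A minimal sketch of this labelling step follows. Only the 0.2 cut-off for irrelevant documents comes from the text above; the sub-thresholds separating "most", "medium", and "less" similar (0.7 and 0.45 here) are illustrative assumptions, as are the toy vectors.

import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def assign_label(d0, di, hi=0.7, mid=0.45, low=0.2):
    """C1 = most similar, C2 = medium, C3 = less similar, C4 = irrelevant.
    Only the 0.2 cut-off is from the paper; hi and mid are illustrative."""
    s = cosine(d0, di)
    if s < low:
        return "C4"
    if s >= hi:
        return "C1"
    return "C2" if s >= mid else "C3"

d0 = [3, 1, 0, 2]                                   # original answer vector (toy)
answers = [[3, 1, 0, 2], [1, 0, 2, 0], [0, 2, 1, 0]]
print([assign_label(d0, d) for d in answers])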

Then we construct one word pattern for each word in W. For word wi, its word pattern xi is defined, similarly as in [1], by

x_i = <x_{i1}, x_{i2}, . . . , x_{ip}>,        (5) [10]

where

x_{ij} = P(c_j | w_i) = \frac{\sum_{q=1}^{n} d_{qi} \, \delta_{qj}}{\sum_{q=1}^{n} d_{qi}}   [10]

for 1 ≤ j ≤ p. Note that dqi indicates the number of occurrences of wi in document dq, as described in Section 2. Also, δqj is defined as

\delta_{qj} = 1 if document d_q belongs to class c_j, and \delta_{qj} = 0 otherwise.
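The sketch below computes these word patterns, i.e., the class-conditional distributions P(c_j | w_i), from a document-word count matrix and the class labels; function and variable names are illustrative.

import numpy as np

def word_patterns(D, labels, p):
    """D: n x m count matrix, labels: length-n array of class indices in {0..p-1}.
    Returns an m x p matrix X with X[i, j] = P(c_j | w_i)."""
    D = np.asarray(D, float)
    labels = np.asarray(labels)
    m = D.shape[1]
    X = np.zeros((m, p))
    for j in range(p):
        X[:, j] = D[labels == j].sum(axis=0)        # occurrences of each word in class j
    totals = X.sum(axis=1, keepdims=True)
    return X / np.where(totals == 0, 1, totals)     # normalise rows to probabilities

D = [[2, 0, 1], [0, 3, 1], [1, 1, 0]]
labels = [0, 1, 0]
print(word_patterns(D, labels, p=2))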

Our goal is to group the words in W into clusters based on these word patterns. A cluster contains a certain number of word patterns and is characterized by the product of p special Gaussian functions. In this paper, the following definition of the split Gaussian fuzzy membership function is used. We adopt the split Gaussian function to account for skewness in the data distribution.

The split normal distribution arises from merging two opposite halves of two probability density functions (PDFs) of normal distributions in their common mode.

The PDF of the split normal distribution is given by

f(x; \mu, \sigma_1, \sigma_2) = A \exp\!\left(-\frac{(x-\mu)^2}{2\sigma_1^2}\right) for x \le \mu, and A \exp\!\left(-\frac{(x-\mu)^2}{2\sigma_2^2}\right) for x > \mu, with A = \sqrt{2/\pi}\,(\sigma_1+\sigma_2)^{-1},        (6) [10]

where µ is the mode (location, real), σ1 is the left-hand-side standard deviation (scale, real), and σ2 is the right-hand-side standard deviation (scale, real). The fuzzy similarity of a word pattern x = <x1, x2, . . . , xp> to cluster G is defined by the following membership function:

\mu_G(x) = \prod_{q=1}^{p} g(x_q; m_q, \sigma_{1q}, \sigma_{2q}),        (7)

where

g(x; m, \sigma_1, \sigma_2) = \exp\!\left(-\left(\frac{x-m}{\sigma_1}\right)^{2}\right) for x \le m, and \exp\!\left(-\left(\frac{x-m}{\sigma_2}\right)^{2}\right) for x > m,

and m_q, σ1q, and σ2q are the mode and the left and right deviations of the qth component of the word patterns in cluster G.
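A minimal sketch of the split-Gaussian membership in (6)-(7) follows: one deviation is used below the mode and another above it, and the product is taken over the pattern components. The toy mode and deviations are illustrative.

import numpy as np

def split_gaussian(x, m, s1, s2):
    """Split-Gaussian similarity of x to mode m with left/right deviations s1, s2."""
    s = np.where(x <= m, s1, s2)
    return np.exp(-((x - m) / np.maximum(s, 1e-12)) ** 2)

def membership(x, mode, s_left, s_right):
    """Fuzzy similarity of word pattern x to a cluster, as the product over components."""
    x, mode = np.asarray(x, float), np.asarray(mode, float)
    return float(np.prod(split_gaussian(x, mode, np.asarray(s_left), np.asarray(s_right))))

# Toy cluster with p = 2 components.
print(membership([0.3, 0.7], mode=[0.4, 0.6], s_left=[0.2, 0.1], s_right=[0.1, 0.2]))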

Reduction of Skew in distribution and Estimation of parameters:

Bekkerman [2] proposes to estimate the parameters using the maximum likelihood method. He shows that the likelihood function can be expressed in an intensive form, in which the scale parameters σ1 and σ2 are functions of the location parameter µ; the likelihood in its intensive form [12] then has to be maximized numerically with respect to the single parameter µ only.

Given the maximum likelihood estimator of µ, the other parameters take the values given in [12], where N is the number of observations, which is equal to the size of the cluster.
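As a hedged illustration of maximum likelihood fitting of the split normal distribution, the sketch below numerically maximizes the full log-likelihood over (µ, σ1, σ2) with SciPy. This is a generic formulation, not the intensive-form derivation of [12]; the starting point and the sample data are arbitrary.

import numpy as np
from scipy.optimize import minimize

def split_normal_nll(params, x):
    """Negative log-likelihood of the split normal distribution (6)."""
    mu, s1, s2 = params
    if s1 <= 0 or s2 <= 0:
        return np.inf
    A = np.sqrt(2.0 / np.pi) / (s1 + s2)
    s = np.where(x <= mu, s1, s2)
    return -np.sum(np.log(A) - (x - mu) ** 2 / (2.0 * s ** 2))

def fit_split_normal(x):
    x = np.asarray(x, float)
    start = np.array([np.median(x), np.std(x), np.std(x)])   # simple starting point
    res = minimize(split_normal_nll, start, args=(x,), method="Nelder-Mead")
    return res.x                                             # (mu, sigma1, sigma2)

sample = np.random.default_rng(0).gamma(2.0, 0.2, 500)       # a right-skewed toy sample
print(fit_split_normal(sample))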

3.4 Implementation details:

To process documents, the bag-of-words model is commonly used. Let D = {d1, d2, . . . , dn} be a document set of n documents, where d1, d2, . . . , dn are individual documents, and each document belongs to one or more of the classes in the set C = {c1, c2, . . . , cp}.

Let d0 be the original answer document against which the others are compared.

First, compare each document d1, . . . , dn with d0 using the cosine similarity and take a threshold of 0.5. Process all the documents whose similarity is more than 0.5, and assign the class labels c1, c2, c3 to each document as per the discussion in Section 3.3.

Let the word set W = {w1, w2, . . . , wm} be the feature vector of the document set. Each document di, 1 ≤ i ≤ n, is represented as di = <di1, di2, . . . , dim>, where each dij denotes the number of occurrences of wj in the ith document. The feature reduction task is to find a new word set W' = {w'1, w'2, . . . , w'k}, k << m, such that W and W' work equally well for all the desired properties with D. After feature reduction, each document di is converted into a new representation d'i = <d'i1, d'i2, . . . , d'ik> and the converted document set is D' = {d'1, d'2, . . . , d'n}. If k is much smaller than m, the computation cost of subsequent operations on D' can be drastically reduced.

In the self-constructing feature clustering algorithm, clusters are generated incrementally from the training data set based on fuzzy similarity, with none existing at the beginning. One word pattern is considered at a time. The fuzzy similarity between the input pattern and each existing cluster is calculated. If the input pattern is not similar enough to any of the existing clusters, a new cluster is created for it and the corresponding membership function is initialized. Otherwise, the input pattern is combined into the existing cluster to which it is most similar, and the corresponding membership function of that cluster is updated.

Let k be the number of currently existing clusters. The clusters are G1, G2, . . . , Gk, respectively. Each cluster Gj has mode µj = <µj1, µj2, . . . , µjp> and deviations σj = <σj1, σj2, . . . , σjp>. Let Sj be the size of cluster Gj. Initially we have k = 0, so no clusters exist at the beginning. For each word pattern xi = <xi1, xi2, . . . , xip>, 1 ≤ i ≤ m, we calculate, according to (6) and (7), the similarity of xi to each existing cluster, i.e., µGj(xi) for 1 ≤ j ≤ k [10]. We say that xi passes the similarity test on cluster Gj if

\mu_{G_j}(x_i) \ge \rho,        (8)

where ρ, 0 ≤ ρ ≤ 1, is a predefined threshold.

If the user intends to have larger clusters, then he or she can give a smaller threshold; otherwise, a bigger threshold can be given. As the threshold increases, the number of clusters also increases. Note that, as usual, the value of the power in (4) also has an effect on the number of clusters obtained: a larger value will make the boundaries of the Gaussian function sharper, and more clusters will be obtained for a given threshold; on the contrary, a smaller value will make the boundaries of the Gaussian function smoother, and fewer clusters will be obtained instead.

Two cases may occur. First, there are no existing fuzzy clusters on which xi has passed the similarity test. In this case, we assume that xi is not similar enough to any existing cluster, and a new cluster Gh, h = k + 1, is created with

\mu_h = x_i, \qquad \sigma_h = \sigma_0,

where σ0 is a user-defined constant vector. Note that the new cluster Gh contains only one member, the word pattern xi, at this point. Estimating the deviation of a cluster by (6) and (7) is impossible, or inaccurate, if the cluster contains few members. In particular, the deviation of a new cluster is 0, since it contains only one member. We cannot use a zero deviation in the calculation of fuzzy similarities; therefore, we initialize the deviation of a newly created cluster to σ0, as indicated above. Of course, the number of clusters k is increased by 1, and the size of cluster Gh is initialized, i.e., Sh = 1.

Second, if there are existing clusters on which xi has passed the similarity test, let cluster Gt be the cluster with the largest membership degree, i.e.,

t = \arg\max_{1 \le j \le k} \mu_{G_j}(x_i).        (9)

In this case, we regard xi as most similar to cluster Gt, and the mode and standard deviations of that cluster are updated.

3.5 Algorithm

Input: Set of documents d1, d2, . . . , dn and the original answer document d0

Output: Set of classified documents and a classification model

Process:

Identify the terms.

Remove the stop words and invalid terms.

Produce the feature vector.

Build the cosine matrix with the document d0.

Compare each document with d0.

Take for further processing all the documents whose cosine similarity is more than 0.5.

Categorise all the documents according to their cosine similarity value, as in Section 3.3.

Build the word patterns [10].

Initialize the mean = 0.5, the minimum number of word patterns per cluster = 2, the variance = 0.5, and the fuzzy similarity threshold β = 0.6.

Loop (over all the word patterns)

Calculate the mean and deviations of each existing cluster.

Calculate the fuzzy similarity of the word pattern to each cluster using the fuzzy membership function.

If (the largest fuzzy similarity is ≥ β) {

Add the word pattern to that cluster.

} else {

Create a new cluster.

Add the word pattern to the new cluster.

}

end loop;

Return the created k clusters;

End procedure
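The following compact Python sketch mirrors the incremental loop of Section 3.4 and the listing above, using the split-Gaussian membership from (6)-(7). The threshold rho and the initial deviation sigma0 are illustrative parameters, and the mode and deviations are simply recomputed from the cluster's stored members rather than using the incremental update formulas of [10].

import numpy as np

def split_membership(x, mode, s1, s2):
    """Fuzzy similarity (7) of pattern x to a cluster with the given mode and deviations."""
    s = np.where(x <= mode, s1, s2)
    return float(np.prod(np.exp(-((x - mode) / s) ** 2)))

def cluster_stats(pts, sigma0):
    """Mode and left/right deviations of a cluster, recomputed from its members."""
    mode = pts.mean(axis=0)
    s1, s2 = np.full(mode.shape, sigma0), np.full(mode.shape, sigma0)
    for q in range(pts.shape[1]):
        lo = pts[pts[:, q] <= mode[q], q] - mode[q]
        hi = pts[pts[:, q] > mode[q], q] - mode[q]
        if lo.size > 1:
            s1[q] = max(np.sqrt(np.mean(lo ** 2)), sigma0)
        if hi.size > 1:
            s2[q] = max(np.sqrt(np.mean(hi ** 2)), sigma0)
    return mode, s1, s2

def self_constructing_clustering(patterns, rho=0.6, sigma0=0.25):
    """Incrementally group word patterns into clusters; rho is the similarity threshold."""
    patterns = np.asarray(patterns, float)
    clusters = []                              # each cluster: member indices plus statistics
    for i, x in enumerate(patterns):
        sims = [split_membership(x, c["mode"], c["s1"], c["s2"]) for c in clusters]
        if not sims or max(sims) < rho:        # similarity test (8) failed for every cluster
            clusters.append({"members": [i], "mode": x.copy(),
                             "s1": np.full(x.shape, sigma0),
                             "s2": np.full(x.shape, sigma0)})
        else:                                  # join the most similar cluster, as in (9)
            c = clusters[int(np.argmax(sims))]
            c["members"].append(i)
            c["mode"], c["s1"], c["s2"] = cluster_stats(patterns[c["members"]], sigma0)
    return clusters

X = np.random.default_rng(1).random((20, 4))   # 20 toy word patterns with p = 4
print(len(self_constructing_clustering(X)))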

3.7 Text Classification:

Given a set D of training documents, text classification can be done as follows. We specify the similarity threshold ρ for (8) and apply our clustering algorithm. Assume that k clusters are obtained for the words in the feature vector W. We then find the weighting matrix T and convert D to D'. Using D' as training data, a classifier based on support vector machines (SVM) is built. Note that any classification technique other than SVM could be applied; Joachims showed that SVM works well and performs better than other methods for text categorization.

Support vector machines (SVM) are a group of supervised learning methods that can be applied to classification or regression. A support vector machine is a technique for a set of related machine learning methods that are used to process data and recognize patterns, and is used here for classification. It is known that SVMs are capable of effectively processing feature vectors of some 10,000 dimensions, given that these are sparse. Several authors have shown that support vector machines provide a fast and effective means of learning text classifiers from examples, and that documents of a given topic can be identified with high accuracy. Topic identification with SVM implies a kind of semantic space in the sense that the learned hyperplane separates the documents in the input space that belong to different topics.

The standard SVM takes a set of input data and predicts, for each given input, which of two possible classes the input belongs to, making the SVM a non-probabilistic binary linear classifier. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm creates a model that assigns new examples to one category or the other. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.

SVM is a kernel method, which finds the maximum-margin hyperplane in feature space separating the images of the training patterns into two groups. To make the method more flexible and robust, some patterns need not be correctly classified by the hyperplane, but the misclassified patterns should be penalized. Therefore, slack variables ξi are introduced to account for misclassifications. The objective function and constraints of the classification problem can be formulated as:

\min_{w, b, \xi} \; \frac{1}{2} \lVert w \rVert^{2} + C \sum_{i=1}^{l} \xi_i \quad subject to \quad y_i \big( w \cdot \phi(x_i) + b \big) \ge 1 - \xi_i, \; \xi_i \ge 0, \; i = 1, . . . , l,   [10]

where l is the number of training patterns, C is a parameter that gives a trade-off between maximum margin and classification error, and yi, being +1 or -1, is the target label of pattern xi. Note that φ: X → F is a mapping from the input space to the feature space F, where patterns are more easily separated, and w · φ(x) + b = 0 is the hyperplane to be derived, with w and b being the weight vector and offset, respectively.

The SVM described above can only separate two classes, yi = +1 and yi = -1. We follow the idea in [36] to construct an SVM-based classifier.

For p classes, we create p SVMs, one SVM for each class. For the SVM of class cv, the training patterns of class cv are treated as having yi = +1, and the training patterns of the other classes are treated as having yi = -1. The classifier is then the aggregation of these p SVMs. Now we are ready to classify unknown documents. Suppose d is an unknown document. We first convert d to d' using the same weighting matrix T as for the training documents. Then we feed d' to the classifier and obtain p values, one from each SVM. Then d belongs to those classes with +1 appearing at the outputs of their corresponding SVMs. For example, consider a case of three classes c1, c2, and c3. If the three SVMs output 1, -1, and 1, respectively, then the predicted classes will be c1 and c3 for this document. If the three SVMs output -1, 1, and 1, respectively, the predicted classes will be c2 and c3.
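The per-class SVM construction described above corresponds to a one-vs-rest scheme; a hedged sketch with scikit-learn's LinearSVC wrapped in OneVsRestClassifier follows. The reduced document vectors and labels are random toy placeholders, not experimental data.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

# Toy reduced document vectors D' (n x k) and their class labels (C1..C4 -> 0..3).
D_reduced = np.random.default_rng(2).random((40, 5))
labels = np.random.default_rng(3).integers(0, 4, size=40)

# One linear SVM per class, trained with that class as +1 and the rest as -1.
clf = OneVsRestClassifier(LinearSVC())
clf.fit(D_reduced, labels)

d_new = np.random.default_rng(4).random((1, 5))   # an unseen converted answer document
print(clf.predict(d_new))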

4. EXPERIMENTAL RESULTS:

To compare the classification effectiveness of each method, we adopt performance measures based on precision. We use the following notation. Given a (test) dataset, we denote by m the number of examples and by c the number of classes. f(i, j) represents the actual probability of example i being of class j; we assume that f(i, j) always takes values in {0, 1}, so strictly it is not a probability but an indicator function. We denote by mj the number of examples of class j.

We denote by p(j) the prior probability of class j, i.e., p(j) = mj / m.

Given a classifier, p(i, j) represents the estimated probability of example i being of class j, taking values in [0, 1]. The corresponding prediction indicator is 1 iff j is the predicted class for i, obtained from p(i, j) using a given threshold or decision rule (especially in multiclass problems), and 0 otherwise.

Accuracy (Acc): This is the most common and simplest measure for evaluating a classifier. It is defined as the proportion of correct predictions of a model (or, conversely, one minus the proportion of misclassification errors) [13].

Mean F-measure (MFM): This measure has been widely employed in information retrieval [7]. For a class j considered as "positive", the F-measure is

F(j) = \frac{2 \cdot precision(j) \cdot recall(j)}{precision(j) + recall(j)},

where precision(j) is the fraction of examples predicted as class j that actually belong to class j, and recall(j) is the fraction of examples of class j that are predicted as class j. Finally, the mean F-measure is defined as the average of F(j) over all c classes:

MFM = \frac{1}{c} \sum_{j=1}^{c} F(j).
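These two measures can be computed directly with scikit-learn; macro-averaged F1 corresponds to averaging the per-class F-measures as defined above, under the assumption of single-label predictions. The labels below are toy values.

from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 2, 2, 3, 1, 0, 2]     # actual classes (toy)
y_pred = [0, 1, 2, 1, 3, 1, 2, 2]     # predicted classes (toy)

acc = accuracy_score(y_true, y_pred)             # proportion of correct predictions
mfm = f1_score(y_true, y_pred, average="macro")  # mean of per-class F-measures
print(acc, mfm)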

We compare the CosFuzzy method with other methods, namely the plain fuzzy method and the plain cosine method.

We consider four categories of documents: purely theoretical subjects, semi-theoretical subjects (theory subjects with diagrams), formula-based subjects (such as Physics and Chemistry), and mathematical subjects. The experimental results show that our CosFuzzy method works well and obtains satisfactory results.

Fig. 2

5. CONCLUSION:

In this paper we have presented a new approach to evaluating descriptive-type answers using our CosFuzzy method. It is a new fuzzy clustering algorithm for text classification that uses a different fuzzy membership function and addresses the problem of evaluating descriptive-type examinations.

6. ACKNOWLEDGMENTS:

Our thanks to the experts who have contributed towards the development of this paper.


