The Background Of The Cloud Computing Technology


02 Nov 2017


Abstract— Cloud computing is a technology that uses the Internet and central remote servers to maintain data and applications. As the technology matures, many organizations and individuals are attracted to storing more of their data, e.g. personal data files, company-related information and personal information, in the cloud. This technology allows for much more efficient computing by centralizing storage, memory, processing and bandwidth. Typically, the cloud servers also need to provide a keyword search feature over the encrypted files they store. Traditional searchable encryption schemes usually support only exact keyword matches. However, users sometimes make typos or use slightly different formats, e.g. "multi - keyword" versus "multi keyword". Recently, some researchers have proposed wildcard-based and gram-based approaches to provide fuzzy keyword search, as well as a solution for fuzzy multi-keyword search. Incremental updates can be easily done using our solution. Our evaluation results show that our approach is more cost-effective in terms of storage size, construction time and processing speed. Our search time is comparable to or better than that of the wildcard-based and gram-based techniques for fuzzy multi-keyword queries in which many encrypted files are returned by the single-word queries issued under approaches that do not support fuzzy multi-keyword queries. In this paper we concentrate on a privacy-aware bedtree-based approach to support the fuzzy multi-keyword search feature.

Keywords- fuzzy multi-keyword search, encrypted data, fuzzy keyword search, co-occurrence probability, bedtree based solution

I. INTRODUCTION

As cloud computing technology becomes more mature, organizations and individuals wish to store more sensitive data, e.g. personal health records, emails, and customer-related documents, in the cloud. By storing such data in the cloud, the data owners do not have to worry about data storage and maintenance. Furthermore, they can enjoy the increased data availability provided by the cloud servers. However, data owners may not trust the cloud servers. Hence, data owners often encrypt the outsourced data for data privacy and to mitigate unauthorized access. Unfortunately, data encryption, if not done appropriately, may reduce the effectiveness of data utilization. In addition, data owners often want to share their published data with many users, some of whom they may not anticipate, e.g. new subscribers, when the data is outsourced. Furthermore, each user may be interested in only some but not all of the data files published by a data publisher. Typically, a user retrieves the files of interest via keyword search instead of retrieving all the encrypted files, which may not be feasible in cloud computing scenarios. Such keyword-based search techniques are widely used in our daily life, e.g. Google plaintext keyword search. In addition, the data encryption solution should provide keyword privacy, since keywords usually contain important information about the data files, e.g. customers' PAN card numbers. Simple encryption of keywords can protect keyword privacy, but it may make it hard for users to perform keyword search.

In recent years, searchable encryption techniques have been developed to allow users to securely search over encrypted data. These techniques typically build an index for each keyword, and associate that index with the files containing that keyword. Using trapdoors of keywords within the index structure preserves both file content and keyword privacy while allowing data users to conduct keyword searches. However, these existing techniques are overly restrictive because they only allow exact keyword search. Often, users that perform searches have typos in their input strings, e.g. typing "deiay" instead of "delay", or the data format may not be the same, e.g. "data-warehousing" versus "data warehousing". A spell-check mechanism can be used. However, this does not completely solve the problem, since it may require additional user interactions to determine the correct word from a list of candidates suggested by the spell-check algorithm. Besides, the spell-check algorithm may not be smart enough to differentiate between two actually valid words. Prior work exploits edit distance to quantify keyword similarity and designed two advanced techniques (wildcard-based and gram-based) to construct storage-efficient fuzzy keyword sets. However, that solution can only provide single fuzzy keyword search, e.g. searching for all data files that contain any keyword with an edit distance of 1 from the word "network". A user that is looking for data files containing the words "ad hoc network" will have to conduct three searches with the keywords "ad", "hoc" and "network". Then, he will have to decrypt the meta-descriptions of all three returned lists and compute an intersection to extract the relevant list of files that contain all three words "ad hoc network".

In this paper, we propose a bed-tree based solution that supports fuzzy multi-keyword search over encrypted cloud data and incremental updates. Given a large collection of files, our solution first constructs a list of useful single and multi-keywords. We use a co-occurrence probability approach to identify useful multi-keywords for our published data items; more sophisticated techniques can be added in the future. Next, we construct all relevant fuzzy keyword sets with the appropriate edit distances that we wish to support. Bloom filters that incorporate all the words in the fuzzy keyword sets of various edit distances are constructed for every keyword. Then, we construct the index tree for the whole data file collection, where each leaf node consists of a hash value of a keyword, one or two data vectors that represent the n-grams of that keyword, and several bloom filters, one for each edit distance value. Any new information related to additional files can be easily inserted into the existing index tree. Later, we submit both the collection of encrypted data files and the index tree to the cloud server. A user submits a query that consists of the hash value of the keyword, the desired edit distance, and a list of hash values of the words included in the associated fuzzy keyword sets. The cloud server then searches the index tree to retrieve relevant keywords and their associated lists of encrypted file identifiers, which identify files containing these keywords. The searching time for a multi-keyword query is better than issuing separate single-keyword queries when each single keyword associated with that multi-keyword returns many encrypted file identifiers.

II. RELATED WORK

A. Plaintext Fuzzy Keyword Search

Plaintext fuzzy keyword search solutions are based on approximate string matching techniques. Efficient merging algorithms have been proposed to merge inverted lists of grams generated from stored strings so as to obtain approximately matched strings faster. Various indexing techniques such as metric trees, suffix trees, and q-gram methods are used; these techniques reduce the storage cost of the gram-based index tree constructed for approximate string matching and support interactive fuzzy plaintext string matching. However, such approaches cannot be directly used for searchable encryption because they can be easily attacked using dictionary and statistics attacks.

B. Searchable Encryption

Searchable encryption has been studied using traditional cryptography approaches. An early construction of searchable encryption encrypts each word in the document independently using a special two-layered encryption construction.

III. PROBLEM FORMULATION

A. System Model

We consider a cloud data hosting service with three entities, namely the data owner, the cloud server and data users, as shown in Fig 1. The data owner has a collection of data files F to be outsourced to a cloud server. All files in F are encrypted and form a collection of encrypted files, C. To provide searching capability over C, the data owner also builds an encrypted searchable index I from F, and then outsources both the index I and the collection of encrypted files C to the cloud server. To search the document collection for certain keywords, an authorized user constructs corresponding trapdoors T for these keywords through some searching control mechanisms. Upon receiving the trapdoors T from data users, the cloud server searches the index I and returns the relevant set of encrypted documents.

B. Threat Model

We assume that cloud servers are "honest-but-curious". Specifically, a cloud server will not delete encrypted data files or index from its storage. It will also correctly follow the designated protocol specification. However, it is curious to analyze data in its storage and message flows so as to learn additional information. Based on what information a cloud server knows, any design should address the following two threat models:

• In the Known Ciphertext Model, a cloud server is assumed to know only the encrypted dataset C and the searchable index I.

• In the stronger Known Background Model, a cloud server is assumed to possess additional knowledge about the dataset, e.g. its subject and related statistical information, beyond what can be accessed in the Known Ciphertext Model. For instance, a cloud server can utilize document frequency or keyword frequency to infer keywords in a query.

C. Design Goals:

To support fuzzy multi-keyword search for outsourced cloud data using the above system and threat models, our system design should achieve the following security and performance guarantees:

• Fuzzy multi-keyword search: our search scheme should support fuzzy multi-keyword queries and return a list of identifiers of encrypted files that contain relevant keywords.

• Privacy preserving: our solution should meet the privacy requirement described in Section III.B and prevent a cloud server from learning additional information from any dataset and its associated index.

• Efficiency: The above goals on functionality and privacy should be accomplished with low communication, storage, and computation overhead.

D. Preliminaries:

Before we explain how we can construct an index tree based on gram-counting order using this technique, first we give a few definitions. Then, we describe the properties that a string order should possess for constructing the gram-counting order index tree used in our design.

Definition 1 (Edit Distance):

Given two strings si and sj, the edit distance between them, denoted d(si, sj), is defined as the minimum number of primitive operations (insertion, deletion, and substitution) needed to transform si into sj.
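For illustration, the edit distance of Definition 1 can be computed with the standard dynamic program; the Python sketch below is only an illustration, not part of the scheme itself:

```python
def edit_distance(s, t):
    """Classic dynamic-programming Levenshtein distance: the minimum
    number of insertions, deletions, and substitutions transforming
    s into t."""
    m, n = len(s), len(t)
    # prev[j] holds the distance between s[:i-1] and t[:j].
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cur[j] = min(prev[j] + 1,                           # delete s[i-1]
                         cur[j - 1] + 1,                        # insert t[j-1]
                         prev[j - 1] + (s[i - 1] != t[j - 1]))  # substitute
        prev = cur
    return prev[n]
```

For example, edit_distance("delay", "deiay") is 1 (one substitution), matching the typo example from the introduction.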

Definition 2 (String Order):

Given the string domain Σ*, a string order is a mapping function φ: Σ* → N, mapping each string to an integer value. The mapping function is chosen such that it satisfies the following properties:

Property 1: It takes linear time to verify whether φ(si) is larger than φ(sj) for any string pair si and sj.

Property 2: It is lower bounding, i.e. given a query string q and a string interval [si, sj], it efficiently returns a lower bound on the edit distance between q and any string sl ∈ [si, sj].

Property 3: Given two string intervals [si, sj] and [sk, sl], the function φ is pairwise lower bounding, i.e. it returns a lower bound on the edit distance between any string in [si, sj] and any string in [sk, sl].

Property 4: Given any string interval [si, sj], the function φ is length bounding, i.e. it efficiently returns an upper bound on the length of any string sl ∈ [si, sj].

Definition 3 (Fuzzy Keyword Search):

Given a collection of N encrypted files C = (F1, F2, …, FN) stored in a cloud server, a set of distinct keywords W = {w1, w2, …, wp} with predefined edit distance d, and a searching input (f(w), k), where f(w) is a one-way function that provides keyword privacy and k is the desired edit distance (k ≤ d), the execution of a fuzzy keyword search returns a set of file identifiers whose corresponding data files possibly contain the word w, denoted FIDw: if w = u for some u ∈ W, then return {FIDu}; otherwise, if w ∉ W, then return {FIDu : ed(w, u) ≤ k}.

Definition 4 (Trapdoors of keywords):

Trapdoors of the keywords are realized by applying a one-way function f as follows: Given a keyword w, and a secret key sk, we compute the trapdoor of w as Tw = f(sk, w). The trapdoors of keywords help us to achieve query and index privacy.
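As one concrete (hypothetical) instantiation, an HMAC can serve as the keyed one-way function f; the definition above only requires that f be one-way and keyed, so the specific primitive below is an assumption:

```python
import hashlib
import hmac


def trapdoor(sk: bytes, keyword: str) -> str:
    """Trapdoor T_w = f(sk, w), instantiated here with HMAC-SHA256.
    The output is deterministic for a given key and keyword, so the
    server can match trapdoors without ever seeing the plaintext."""
    return hmac.new(sk, keyword.encode("utf-8"), hashlib.sha256).hexdigest()
```

The same keyword always yields the same trapdoor under one key, while changing either the keyword or the key changes the output, which is what gives query and index privacy.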

IV. PRIVACY-AWARE BEDTREE BASED CONSTRUCTION

Next, we summarize how an index tree for keywords can be built using the gram counting order. An n-gram is a contiguous sequence of n characters from a string s. If s is padded with n−1 special characters (e.g. '#') at each end, there exist |s|+n−1 overlapping n-grams.

For example, for the string s1="network" and n=2, the n-gram set is Q(s1)={#n, ne, et, tw, wo, or, rk, k#}. An n-gram set can be represented as a vector in a high-dimensional space where each dimension corresponds to a distinct n-gram. To compress the information in the vector space, one can use a hash function to map each n-gram to one of L buckets (L=4 is shown in Fig 2), and count the number of n-grams in each bucket. Thus, the n-gram set is transformed into a vector of L non-negative integers.
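The n-gram bucketing can be sketched as follows. The hash function and bucket count here are illustrative choices, so the resulting vector will generally differ from the one in Fig 2, although its entries always sum to |s|+n−1:

```python
import hashlib


def gram_vector(s: str, n: int = 2, L: int = 4) -> list:
    """Map a string to an L-dimensional vector in gram-counting space:
    pad the string, enumerate its |s|+n-1 overlapping n-grams, hash
    each gram to one of L buckets, and count grams per bucket."""
    padded = "#" * (n - 1) + s + "#" * (n - 1)
    grams = [padded[i:i + n] for i in range(len(padded) - n + 1)]
    vec = [0] * L
    for g in grams:
        # A stable hash (unlike Python's randomized str hash) picks the bucket.
        bucket = int(hashlib.md5(g.encode()).hexdigest(), 16) % L
        vec[bucket] += 1
    return vec
```

For "network" with n=2 the vector entries sum to 8, one per overlapping 2-gram.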

For example, in Fig 2, after hashing the eight 2-grams of the string s1 to four buckets, we get a 4-dimensional vector v1=<2,2,3,1> in the gram-counting space. If vi and vj are the L-dimensional bucket vector representations of si and sj respectively, the edit distance between si and sj is no smaller than

⌈ (Σ_{m=1}^{L} |vi[m] − vj[m]|) / 2n ⌉,    (1)

since a single edit operation can change at most n of a string's n-grams.

Assume that for a given string interval [si, sj], the lower and upper bound values of bucket m are lb[m] and ub[m] (for 1 ≤ m ≤ L). After transforming the query string q to a vector vq in gram counting order, we can apply Eqn (1) with lb[m] and ub[m] to compute a lower bound on the edit distance from q to any string contained in the interval. This allows us to determine whether we need to search through that string interval for strings that have an edit distance below a certain threshold d from the query string q. One can construct a bedtree-based index tree with three levels as shown in Fig 3, where each leaf node contains information about the data vector of a word w (denoted vw), the word w, and a list of encrypted file identifiers that contain the word w.
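One standard form of such a pruning bound (each edit changes at most n grams, so the L1 gap between two bucket vectors moves by at most 2n per edit) can be sketched as:

```python
import math


def ed_lower_bound(vi, vj, n=2):
    """Lower bound on the edit distance between two strings from their
    gram-count bucket vectors: one edit operation changes at most n
    n-grams, so it can change the L1 distance between the bucket
    vectors by at most 2n."""
    l1 = sum(abs(a - b) for a, b in zip(vi, vj))
    return math.ceil(l1 / (2 * n))
```

During a tree traversal, an interval whose bound already exceeds the query threshold d can be skipped without inspecting any string inside it.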

We can store the exact keyword in the data structure of an index node within the index tree which facilitates the exact comparison of the query and stored keywords. However, in our application scenario, we cannot store keywords since we need to preserve keyword privacy. Instead, we store hash values of that string. Typically, we only need to store one hash value unless hash collisions with other words occur. When that happens, that keyword will be represented by as many hash values as needed to uniquely identify the string. During the construction of the index tree, if we find that k words map to the same hash value using the first hash function f1, a second (or third) hash function f2 (f3) will be used to hash those k words until they can be uniquely identified. The extra hash values will be stored. Our experience using the two datasets described in Section VI indicates that a 32-bit hash function is sufficient to avoid collisions.

To support fuzzy keyword search, one bloom filter for each edit distance value is constructed for every keyword. This design choice allows us to minimize the collision probability, which may cause a fuzzy keyword with a larger edit distance to be found in a bloom filter of another keyword. For example, for the word kw="network", the following words, representing all possible keywords having an edit distance of 1 from the word "network": {*network, n*etwork, ne*twork, net*work, netw*ork, netwo*rk, networ*k, network*, *etwork, n*twork, ne*work, net*ork, netw*rk, netwo*k, networ*}, are inserted into a bloom filter, BFkw(1). Then, a data vector vkw is generated for the word "network". The data vector vkw, its hash value Hkw, its bloom filter BFkw(1) and a list of file identifiers that contain this keyword constitute a new leaf node structure that is inserted into the Bedtree index tree. The typical B+ index tree insertion technique is used to insert this leaf node. For example, Fig 4 shows an existing index tree; the data vector of a new leaf node is determined to fall within the string interval stored in the third intermediate node, denoted I3, and the node is finally inserted at the leaf level as L3,6.
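To make the construction of BFkw(1) concrete, the sketch below enumerates the wildcard forms of a keyword and inserts them into a minimal Bloom filter; the filter size and the SHA-256-derived hash functions are illustrative choices, not the paper's exact parameters:

```python
import hashlib


def wildcard_set(w: str):
    """All wildcard forms of w within edit distance 1: a '*' either
    sits between characters (a possible insertion) or replaces one
    character (a possible substitution or deletion)."""
    forms = set()
    for i in range(len(w) + 1):
        forms.add(w[:i] + "*" + w[i:])       # insertion position
    for i in range(len(w)):
        forms.add(w[:i] + "*" + w[i + 1:])   # replaced/deleted character
    return forms


class BloomFilter:
    """Minimal Bloom filter with k hash functions derived from SHA-256."""

    def __init__(self, m_bits=1024, k=2):
        self.m, self.k, self.bits = m_bits, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))


# Build BF_kw(1) for kw = "network": 8 insertion forms + 7 substitution
# forms = 15 wildcard words.
bf = BloomFilter()
for form in wildcard_set("network"):
    bf.add(form)
```

Every inserted form is always found again (Bloom filters have no false negatives); false positives are possible but are bounded by the formula discussed in Section V.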

V. CONSTRUCTIONS OF FUZZY MULTI-KEYWORD SEARCH OVER ENCRYPTED DATA

Here, we describe how we build our system that supports multi-keyword search over encrypted cloud data. First, the data owner should extract distinct keywords that describe the data items to be published. For example, if the data items are scientific publications, the keywords can be extracted from the title, the keyword/general-term fields, and the abstract of a publication. After extracting the distinct keyword set W from the document collection F, the data owner can use domain-specific sources to add additional semantically related words to the keyword set, e.g. the words describing the 23 categories from the Microsoft Academic Search domain page can serve as additional keywords for ACM publications. In addition, one can use typical semantic GrowBag algorithms to discover semantically related words to enhance the keyword set.

In this work, we merely use (a) co-occurrence probabilities of words present in the core keyword sets to identify relevant double-word keywords, e.g. "congestion control", "information retrieval", etc., to be added to the keyword set, and (b) the multi-keywords specified in the general and keyword fields of ACM publications. For each keyword w ∈ W, we construct its associated fuzzy keyword sets {Γw(i)} with edit distance i = 1, …, d. For each constructed fuzzy keyword set Γw(i) with edit distance i, a bloom filter BFw(i) is constructed based on the words in the associated fuzzy keyword set. The hash value of the keyword, a list of encrypted file identifiers that contain this keyword, and a list of bloom filters with appropriate edit distances are then inserted into the index tree. A separate bloom filter is used for each edit distance value because we want to minimize the probability of finding the hash value of a word in the fuzzy keyword set of an irrelevant keyword y, with edit distance larger than 1, in a bloom filter of a keyword w that contains the hash values of all words in its fuzzy keyword set with an edit distance of k (1 < k ≤ d).

Fig 5: Fuzzy Multi-Keyword Search
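The co-occurrence step in (a) can be instantiated in several ways; the sketch below uses one plausible heuristic, promoting an adjacent word pair when its conditional co-occurrence probability crosses a threshold. The threshold and the estimator are assumptions for illustration, not taken from this work:

```python
from collections import Counter


def find_multi_keywords(titles, threshold=0.5):
    """Hypothetical co-occurrence heuristic: promote an adjacent word
    pair (a, b) to a double-word keyword when
    P(b | a) = count("a b") / count(a) exceeds the threshold and the
    pair occurs more than once in the corpus."""
    word_count, pair_count = Counter(), Counter()
    for title in titles:
        words = title.lower().split()
        word_count.update(words)
        pair_count.update(zip(words, words[1:]))
    return {f"{a} {b}" for (a, b), c in pair_count.items()
            if c / word_count[a] >= threshold and c > 1}
```

On a toy corpus of paper titles this surfaces pairs like "congestion control" while ignoring pairs that appear only once.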

A. Basic Search Scheme

Setup: after extracting the distinct keyword set W from the document collection F, the data owner first generates a Bloom filter bwi,k that includes the fuzzy keyword set Swi,k for each keyword wi and each edit distance k = 1, …, d. Then, he generates an M-bit hash value hwi that represents the keyword, produces the data vector based on the gram counting order [12], and inserts a leaf node containing the following information: [hwi, bwi,1, bwi,2, …, bwi,d, {FIDwi}] into the bed-tree structure described in Section III.D. The data owner also encrypts FIDwi as Enc(sk, FIDwi || wi). All encrypted file identifiers that contain a keyword are included in a list, and the address pointer to this list is stored as part of the entry in the leaf node of the constructed index tree. Both the index tree and the encrypted file identifiers and encrypted documents are then outsourced to the cloud server.

Fig 5 shows how a user submits a fuzzy multi-keyword search query. A search request consists of the tuple (vskey, hskey, {BFskey,1, BFskey,2, …, BFskey,d}, d), where skey is the keyword that the user is interested in searching, vskey is the data vector based on the gram counting order of skey, d is the edit distance that is tolerable to the user, and BFskey,k is a set of hash values that represent the fuzzy keyword set with edit distance k from the word skey. Upon receiving the search request, the server uses the data vector of that keyword to search for the presence of such an entry in the index tree. If the edit-distance lower bound computed from the query data vector and a stored data vector is within the edit distance bound, then the hash values are compared to see if there is a match. If there is a match, the list of stored file identifiers is returned. If there is no exact match, then the server determines whether any hash value of the words in the fuzzy keyword set in the query can be found in the bloom filter of any stored leaf node. If there is a match, the server retrieves the list of encrypted file identifiers and includes it as part of the response sent to the user. Due to the properties of Bloom filters, there exists a non-zero probability of falsely recognizing unrelated words. The probability of a false positive is

f = (1 − (1 − 1/m)^kn)^k ≈ (1 − e^(−kn/m))^k, where m is the bit length of the bloom filter, n is the number of keywords in the fuzzy keyword set and k is the number of hash functions used in this bloom filter. To minimize the possibility of returning keywords that are not relevant, one should use two or more hash functions to build bloom filters of different edit distance values for each keyword.
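Plugging numbers into this formula shows how the false-positive rate grows with the fuzzy-set size n; the parameter values below are arbitrary examples:

```python
import math


def bloom_fp_rate(m, n, k):
    """False-positive probability of an m-bit Bloom filter holding n
    items with k hash functions:
    exact  = (1 - (1 - 1/m)^(k*n))^k
    approx = (1 - e^(-k*n/m))^k"""
    exact = (1 - (1 - 1 / m) ** (k * n)) ** k
    approx = (1 - math.exp(-k * n / m)) ** k
    return exact, approx
```

With m = 1024 bits, k = 2 hash functions and a 15-word fuzzy set, the rate stays well below 1%; it rises quickly as n grows toward m.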

In addition, instead of using only one data vector, one can use two functions to generate two data vectors to further reduce the collision probability. Our experience using the two datasets described in Section VI.A shows that using two data vectors and two hash functions for the bloom filter is sufficient to reduce this probability to a very small number. The search cost associated with our solution is O(|W|), where |W| is the number of keywords.
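Putting the pieces together, the per-leaf matching logic of the basic scheme can be sketched as follows; the node layout, helper names, and the use of plain hash sets in place of real Bloom filters are all illustrative simplifications:

```python
import math


def match_leaf(leaf, query, n=2):
    """Decide whether a leaf node answers a fuzzy query.
    leaf  = (data_vector, keyword_hash, bloom_filters, file_id_list)
    query = (data_vector, keyword_hash, fuzzy_hash_sets, d)
    Returns the encrypted file identifiers on a hit, else None."""
    v_leaf, h_leaf, blooms, file_ids = leaf
    v_q, h_q, fuzzy_sets, d = query
    # Prune with the gram-counting lower bound on the edit distance.
    l1 = sum(abs(a - b) for a, b in zip(v_leaf, v_q))
    if math.ceil(l1 / (2 * n)) > d:
        return None
    if h_leaf == h_q:                      # exact keyword match
        return file_ids
    for k in range(1, d + 1):              # fuzzy match at distance k
        if any(h in blooms[k - 1] for h in fuzzy_sets[k - 1]):
            return file_ids
    return None
```

The pruning test runs before any hash or bloom-filter comparison, which is what lets the bed-tree traversal skip whole subtrees.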

Theorem 1 shows that the wildcard-based scheme satisfies both completeness and soundness. Specifically, upon receiving the request w, all of the keywords {wi} will be returned if and only if ed(w, wi) ≤ k. Similar arguments work for our basic scheme, assuming that the collision probability in the various bloom filters is very small (e.g. 10^−3). Below, we give a rough sketch of why we believe our basic search scheme addresses the threat model described in Section III.B.

Lemma 1: The intersection of the fuzzy sets Su,k and Sw,k for u and w is non-empty if and only if ed(u, w) ≤ k.

Proof: First, we show that Su,k ∩ Sw,k is non-empty when ed(u, w) ≤ k. It suffices to find an element in Su,k ∩ Sw,k. Let u = a1a2…as and w = b1b2…bt, where each ai and bj is a single character. By the definition of edit distance, u can be transformed into w with ed(u, w) edit operations. Let u* = a1*a2*…as*, where aj* = aj if no operation is performed at position j, and aj* = * otherwise. Since the edit operations can be inverted starting from w, the same element with wildcards at the same positions can be derived from w. Because ed(u, w) ≤ k, u* is included in both Su,k and Sw,k.

Next, we prove that Su,k ∩ Sw,k is empty if ed(u, w) > k. The proof is by contradiction. Assume that there exists a u* belonging to Su,k ∩ Sw,k; we will show that this implies ed(u, w) ≤ k, a contradiction. First, since u* ∈ Su,k ∩ Sw,k, the number of wildcards in u*, denoted n*, is not greater than k. Next, we prove that ed(u, w) ≤ n* by induction. For the base case n* = 1, there are several cases to consider. If u* is derived by a deletion from both u and w, then ed(u, w) ≤ 1, because all the other characters are the same except the character at that position; the remaining cases can be analyzed in a similar way and are hence omitted. For the inductive step, assume the claim holds when n* = t; we show that it also holds when n* = t + 1. Given u* ∈ Su,k ∩ Sw,k with t + 1 wildcards, pick a wildcard position, cancel the underlying operations, and revert it to the original characters of u and w at that position, obtaining two new elements p and q respectively. Then, perform one edit operation at that position of p to make its character agree with w; p is thereby changed into p*, which has only t wildcards. Thus ed(p*, w) ≤ t, and hence ed(u, w) ≤ t + 1. Therefore ed(u, w) ≤ n* ≤ k, which yields the desired contradiction.

To ensure that the probability of collision in the bloom filter is low, one can use a sufficiently large bloom filter (say with M bits) with several hash functions. In our experiments, we find that using a 32-bit bloom filter with two hash functions is sufficient.

B. Multi-keyword search

Assume that the published data items are ACM publications and that a user is interested in searching for all publications with the keyword "information retrieval". In a single-keyword scheme, a querying user has to submit two queries, one for the word "information" and one for the word "retrieval", decrypt the two returned lists of file identifiers, and then perform an intersection of the two lists before he can construct a list of encrypted file identifiers to retrieve the relevant encrypted files from the cloud server. In our approach, however, that user can just submit a trapdoor representing the hashed value of the words "information retrieval", together with the hash values of the words in its associated fuzzy keyword set and the desired edit distance value, to the cloud server if he wishes to use the fuzzy keyword search feature. The cloud server will search the index tree according to the procedures described in Alg 1 & Alg 2 of Fig 6 to retrieve a list of relevant encrypted file identifiers. Additional pseudo code for constructing storage and search tokens is included in Appendix 1. To provide efficient search, we further employ some optimizations to prune the number of intermediate and leaf nodes that we need to examine while finding relevant leaf nodes to construct a list of relevant file identifiers as part of a query response. For example, in Fig 7, with optimization, one only needs to traverse intermediate nodes I1, I3, I5 and examine some leaf nodes, skipping the intermediate nodes I2 and I4 and all leaf nodes under them.

VI. PERFORMANCE EVALUATION

A. Performance of Index Tree Construction

Fig 8: Index Construction Time for our Enhanced BedTree and Wildcard Technique approaches with edit distance d=1

We conduct a thorough experimental evaluation of our proposed techniques on real data sets; the first consists of about 2500 publications. For each data set, we extract the words in the paper titles to construct the core keyword set in our experiment. In addition, we compute the co-occurrence probabilities of two keywords using the words in the constructed core keyword set to identify extra multi-keywords that can be associated with each title. Using this approach, we can find useful keywords like "power control", "congestion control", etc. The total number of unique keywords for the Infocom dataset is 3550 and their average word length is 7.41. The total number of unique keywords in the ACM dataset is 12386 and the average keyword length is 12.7 (the total length of a multi-keyword is used as the keyword length for computing this average). Our experiment is conducted on a Linux machine with an Intel Core 2 processor running at 1.86 GHz and 4 GB of DDR2-800 memory. The performance of our scheme is evaluated in terms of the time and storage cost of index construction and the search time for 10 queries. We compare our solution with the two approaches described earlier using construction time, storage cost and search time as our performance metrics. Since the single-keyword scheme does not support multi-keyword search directly, we report the total time taken to decrypt the two lists of encrypted file identifiers returned by the two single-word queries.

We first investigate how efficiently the index can be constructed using our approach. We use SHA-1 as our hash function with an output length of l = 160 bits, and take the first 32 bits as the hash value of any keyword. We use different hash functions for data vector and bloom filter generation. Fig 8 shows the index tree construction time (measured in ms). One can see that our index tree construction time is smaller than that of the two approaches described once the number of keywords exceeds 2200. In addition, our storage cost is significantly reduced compared to the symbol-based trie-traverse approach and slightly better than the listing-based approach. Both the construction time and the storage cost increase quite linearly as the number of publication titles increases. Similar plots for the ACM publications dataset are included in Appendix 2. The results for the second dataset equally show that our solution is more efficient.

B. Performance of Fuzzy Multi-keyword Search

Next, we investigate the effectiveness of the different searches and their search times. Our search time is larger (but still small) for single fuzzy keyword search, but the search time of our approach tends to be better for fuzzy multi-keyword queries where many encrypted file identifiers for each word are returned under the single-keyword approach. For example, with the query "Delay", our search time is 0.6 ms while that of the trie-traverse approach is 1.2 ms. For the query "power", our search time is 0.3 ms while that of the trie-traverse approach is 0.66 ms.

VII. CONCLUSION AND FUTURE WORK

In this paper, for the first time, we formalize and solve the problem of supporting efficient yet privacy-preserving fuzzy multi-keyword search over encrypted data items. Our approach allows easy insertion of newly published data items without having to reconstruct the whole index tree when new information is available. We also use co-occurrence probabilities to determine additional useful multi-keywords that can be associated with the published encrypted data items. This feature allows us to achieve faster response times for file retrievals with multi-keyword queries.

Our method also allows fuzzy multi-keyword search. Our evaluations using two datasets show that our proposed solution has better construction time and much smaller storage cost as the size of the data file collection increases. In addition, the search time using our approach is reasonable and tends to be better for multi-keyword search where the returned list for an individual keyword is large. In the near future, we hope to use more sophisticated techniques, such as the Topic-Link LDA approach, to find semantically related words, e.g. "monsters" and "mythological beings", that can be used as additional meta-data descriptions of encrypted data files for fuzzy multi-keyword searches. Through rigorous security analysis, we show that our proposed privacy-aware Bedtree-based technique is secure while correctly realizing the goal of fuzzy multi-keyword search. Extensive experimental results demonstrate the efficiency of our solution.

As our ongoing work, we will continue to research security mechanisms that support richer search semantics, taking into consideration conjunctions of keywords, sequences of keywords, and even complex natural-language semantics to produce highly relevant search results, as well as search ranking that sorts the search results according to relevance criteria.


