
Toward Secure and Dependable Storage Services in Cloud Computing

Cong Wang, Student Member, IEEE, Qian Wang, Student Member, IEEE, Kui Ren, Senior Member, IEEE, Ning Cao, and Wenjing Lou, Senior Member, IEEE

Abstract—Cloud storage enables users to remotely store their data and enjoy on-demand, high-quality cloud applications without the burden of local hardware and software management. Though the benefits are clear, such a service also relinquishes users' physical possession of their outsourced data, which inevitably poses new security risks toward the correctness of the data in the cloud. In order to address this new problem and further achieve a secure and dependable cloud storage service, we propose in this paper a flexible distributed storage integrity auditing mechanism, utilizing the homomorphic token and distributed erasure-coded data. The proposed design allows users to audit the cloud storage with very lightweight communication and computation cost. The auditing result not only ensures a strong cloud storage correctness guarantee, but also simultaneously achieves fast data error localization, i.e., the identification of misbehaving server(s). Considering that cloud data are dynamic in nature, the proposed design further supports secure and efficient dynamic operations on outsourced data, including block modification, deletion, and append. Analysis shows the proposed scheme is highly efficient and resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

Index Terms—Data integrity, dependable distributed storage, error localization, data dynamics, cloud computing.


1 INTRODUCTION

Several trends are opening up the era of cloud computing, which is an Internet-based development and use of computer technology. The ever cheaper and more powerful processors, together with the Software as a Service (SaaS) computing architecture, are transforming data centers into pools of computing service on a huge scale. The increasing network bandwidth and reliable yet flexible network connections make it even possible that users can now subscribe to high-quality services from data and software that reside solely on remote data centers.

Moving data into the cloud offers great convenience to users since they do not have to care about the complexities of direct hardware management. Amazon Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) [2] are both well-known examples of pioneering cloud computing vendors. While these Internet-based online services do provide huge amounts of storage space and customizable computing resources, this computing platform shift, however, is eliminating the responsibility of local machines for data maintenance at the same time. As a result, users are at the mercy of their cloud service providers (CSP) for the availability and integrity of their data [3], [4]. On the one hand, although cloud infrastructures are much more powerful and reliable than personal computing devices, a broad range of both internal and external threats to data integrity still exists. Examples of outages and data loss incidents of noteworthy cloud storage services appear from time to time [5], [6], [7], [8], [9]. On the other hand, since users may not retain a local copy of outsourced data, there exist various incentives for the CSP to behave unfaithfully toward the cloud users regarding the status of their outsourced data. For example, to increase the profit margin by reducing cost, it is possible for the CSP to discard rarely accessed data without being detected in a timely fashion [10]. Similarly, the CSP may even attempt to hide data loss incidents so as to maintain a reputation [11], [12], [13]. Therefore, although outsourcing data into the cloud is economically attractive for the cost and complexity of long-term large-scale data storage, its lack of strong assurance of data integrity and availability may impede its wide adoption by both enterprise and individual cloud users.

In order to achieve the assurances of cloud data integrity and availability and to enforce the quality of cloud storage service, efficient methods that enable on-demand data correctness verification on behalf of cloud users have to be designed. However, the fact that users no longer have physical possession of data in the cloud prohibits the direct adoption of traditional cryptographic primitives for the purpose of data integrity protection. Hence, the verification of cloud storage correctness must be conducted without explicit knowledge of the whole data files [10], [11], [12], [13]. Meanwhile, cloud storage is not just a third-party data warehouse. The data stored in the cloud may not only be accessed but also frequently updated by the users [14], [15], [16], including insertion, deletion, modification, appending, etc. Thus, it is also imperative to support the integration of this dynamic feature into the cloud storage correctness assurance, which makes the system design even more challenging. Last but not least, the deployment of cloud computing is powered by data centers running in a simultaneous, cooperated, and distributed manner [3]. It is more advantageous for individual users to store their data redundantly across multiple physical servers so as to reduce the data integrity and availability threats. Thus, distributed protocols for storage correctness assurance will be of most importance in achieving robust and secure cloud storage systems. However, such an important area remains to be fully explored in the literature.

220 IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 5, NO. 2, APRIL-JUNE 2012

. C. Wang is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 1451 East 55th St., Apt. 1017 N, Chicago, IL 60616. E-mail: [email protected].
. Q. Wang is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 500 East 33rd St., Apt. 602, Chicago, IL 60616. E-mail: [email protected].
. K. Ren is with the Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3301 Dearborn St., Siegel Hall 319, Chicago, IL 60616. E-mail: [email protected].
. N. Cao is with the Department of Electrical and Computer Engineering, Worcester Polytechnic Institute, 100 Institute Road, Worcester, MA 01609. E-mail: [email protected].
. W. Lou is with the Department of Computer Science, Virginia Polytechnic Institute and State University, Falls Church, VA 22043. E-mail: [email protected].

Manuscript received 4 Apr. 2010; revised 14 Sept. 2010; accepted 25 Dec. 2010; published online 6 May 2011. For information on obtaining reprints of this article, please send e-mail to: [email protected] and reference IEEECS Log Number TSCSI-2010-04-0033. Digital Object Identifier no. 10.1109/TSC.2011.24.

1939-1374/12/$31.00 © 2012 IEEE. Published by the IEEE Computer Society.

Recently, the importance of ensuring remote data integrity has been highlighted by the following research works under different system and security models [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22]. These techniques, while they can be useful to ensure storage correctness without having users possess the data locally, all focus on the single-server scenario. They may be useful for quality-of-service testing [23], but do not guarantee data availability in case of server failures. Although directly applying these techniques to distributed storage (multiple servers) could be straightforward, the resulting storage verification overhead would be linear in the number of servers. As a complementary approach, researchers have also proposed distributed protocols [23], [24], [25] for ensuring storage correctness across multiple servers or peers. However, while providing efficient cross-server storage verification and data availability insurance, these schemes all focus on static or archival data. As a result, their capability of handling dynamic data remains unclear, which inevitably limits their full applicability in cloud storage scenarios.

In this paper, we propose an effective and flexible distributed storage verification scheme with explicit dynamic data support to ensure the correctness and availability of users' data in the cloud. We rely on erasure-correcting code in the file distribution preparation to provide redundancies and guarantee the data dependability against Byzantine servers [26], where a storage server may fail in arbitrary ways. This construction drastically reduces the communication and storage overhead as compared to the traditional replication-based file distribution techniques. By utilizing the homomorphic token with distributed verification of erasure-coded data, our scheme achieves the storage correctness insurance as well as data error localization: whenever data corruption has been detected during the storage correctness verification, our scheme can almost guarantee the simultaneous localization of data errors, i.e., the identification of the misbehaving server(s). In order to strike a good balance between error resilience and data dynamics, we further explore the algebraic property of our token computation and erasure-coded data, and demonstrate how to efficiently support dynamic operations on data blocks, while maintaining the same level of storage correctness assurance. In order to save the time, computation resources, and even the related online burden of users, we also provide the extension of the proposed main scheme to support third-party auditing, where users can safely delegate the integrity checking tasks to third-party auditors (TPA) and be worry-free to use the cloud storage services.

Our work is among the first few in this field to consider distributed data storage security in cloud computing. Our contribution can be summarized as the following three aspects: 1) Compared to many of its predecessors, which only provide binary results about the storage status across the distributed servers, the proposed scheme achieves the integration of storage correctness insurance and data error localization, i.e., the identification of misbehaving server(s). 2) Unlike most prior works for ensuring remote data integrity, the new scheme further supports secure and efficient dynamic operations on data blocks, including: update, delete, and append. 3) The experimental results demonstrate that the proposed scheme is highly efficient. Extensive security analysis shows our scheme is resilient against Byzantine failure, malicious data modification attack, and even server colluding attacks.

The rest of the paper is organized as follows: Section 2

introduces the system model, adversary model, our design

goal, and notations. Then we provide the detailed

description of our scheme in Sections 3 and 4. Section 5

gives the security analysis and performance evaluations,

followed by Section 6 which overviews the related work.

Finally, Section 7 concludes the whole paper.

2 PROBLEM STATEMENT

2.1 System Model

A representative network architecture for cloud data storage service is illustrated in Fig. 1. Three different network entities can be identified as follows:

. User: an entity, who has data to be stored in the cloud and relies on the cloud for data storage and computation; can be either an enterprise or an individual customer.

. Cloud Server (CS): an entity, which is managed by a cloud service provider (CSP) to provide data storage service and has significant storage space and computation resources (we will not differentiate CS and CSP hereafter).

. Third-Party Auditor (TPA): an optional entity, who has expertise and capabilities that users may not have, and is trusted to assess and expose risk of cloud storage services on behalf of the users upon request.

Fig. 1. Cloud storage service architecture.

In cloud data storage, a user stores his data through a CSP into a set of cloud servers, which are running in a simultaneous, cooperated, and distributed manner. Data redundancy can be employed with a technique of erasure-correcting code to further tolerate faults or server crashes as the user's data grow in size and importance. Thereafter, for application purposes, the user interacts with the cloud servers via the CSP to access or retrieve his data. In some cases, the user may need to perform block-level operations on his data. The most general forms of these operations we are considering are block update, delete, insert, and append. Note that in this paper, we put more focus on the support of file-oriented cloud applications rather than nonfile application data, such as social networking data. In other words, the cloud data we are considering are not expected to change rapidly within a relatively short period.

As users no longer possess their data locally, it is of critical importance to assure users that their data are being correctly stored and maintained. That is, users should be equipped with security means so that they can make continuous correctness assurance (to enforce the cloud storage service-level agreement) of their stored data even without the existence of local copies. In case users do not necessarily have the time, feasibility, or resources to monitor their data online, they can delegate the data auditing tasks to an optional trusted TPA of their respective choices. However, to securely introduce such a TPA, any possible leakage of users' outsourced data toward the TPA through the auditing protocol should be prohibited.

In our model, we assume that the point-to-point communication channels between each cloud server and the user are authenticated and reliable, which can be achieved in practice with little overhead. These authentication handshakes are omitted in the following presentation.

2.2 Adversary Model

From the user's perspective, the adversary model has to capture all kinds of threats toward his cloud data integrity. Because cloud data do not reside at the user's local site but in the CSP's address domain, these threats can come from two different sources: internal and external attacks. For internal attacks, a CSP can be self-interested, untrusted, and possibly malicious. Not only may it desire to move data that have not been or are rarely accessed to a lower tier of storage than agreed for monetary reasons, but it may also attempt to hide a data loss incident due to management errors, Byzantine failures, and so on. For external attacks, data integrity threats may come from outsiders who are beyond the control domain of the CSP, for example, economically motivated attackers. They may compromise a number of cloud data storage servers in different time intervals and subsequently be able to modify or delete users' data while remaining undetected by the CSP.

Therefore, we consider an adversary in our model with the following capabilities, which capture both external and internal threats toward the cloud data integrity. Specifically, the adversary is interested in continuously corrupting the user's data files stored on individual servers. Once a server is compromised, an adversary can pollute the original data files by modifying or introducing its own fraudulent data to prevent the original data from being retrieved by the user. This corresponds to the threats from external attacks. In the worst case scenario, the adversary can compromise all the storage servers, so that he can intentionally modify the data files as long as they are internally consistent. In fact, this is equivalent to the internal attack case, where all servers are assumed to be colluding together from the early stages of application or service deployment to hide a data loss or corruption incident.

2.3 Design Goals

To ensure the security and dependability of cloud data storage under the aforementioned adversary model, we aim to design efficient mechanisms for dynamic data verification and operation, and achieve the following goals:

1. Storage correctness: to ensure users that their data are indeed stored appropriately and kept intact all the time in the cloud.

2. Fast localization of data error: to effectively locate the malfunctioning server when data corruption has been detected.

3. Dynamic data support: to maintain the same level of storage correctness assurance even if users modify, delete, or append their data files in the cloud.

4. Dependability: to enhance data availability against Byzantine failures, malicious data modification and server colluding attacks, i.e., minimizing the effect brought by data errors or server failures.

5. Lightweight: to enable users to perform storage correctness checks with minimum overhead.

2.4 Notation and Preliminaries

. F — the data file to be stored. We assume that F can be denoted as a matrix of m equal-sized data vectors, each consisting of l blocks. Data blocks are all well represented as elements in the Galois field GF(2^p) for p = 8 or 16.

. A — the dispersal matrix used for Reed-Solomon coding.

. G — the encoded file matrix, which includes a set of n = m + k vectors, each consisting of l blocks.

. f_key(·) — a pseudorandom function (PRF), which is defined as f : {0,1}^* × key → GF(2^p).

. φ_key(·) — a pseudorandom permutation (PRP), which is defined as φ : {0,1}^{log2(l)} × key → {0,1}^{log2(l)}.

. ver — a version number bound with the index of individual blocks, which records the number of times the block has been modified. Initially, we assume ver is 0 for all data blocks.

. s_ij^{ver} — the seed for the PRF, which depends on the file name, the block index i, the server position j, as well as the optional block version number ver.

3 ENSURING CLOUD DATA STORAGE

In a cloud data storage system, users store their data in the cloud and no longer possess the data locally. Thus, the correctness and availability of the data files being stored on the distributed cloud servers must be guaranteed. One of the key issues is to effectively detect any unauthorized data modification and corruption, possibly due to server compromise and/or random Byzantine failures. Besides, in the distributed case, when such inconsistencies are successfully detected, finding which server the data error lies in is also of great significance, since it can always be the first step toward fast recovery of the storage errors and/or identification of potential threats from external attacks.

To address these problems, our main scheme for ensuring cloud data storage is presented in this section. The first part of the section is devoted to a review of the basic tools from coding theory that are needed in our scheme for file distribution across cloud servers. Then, the homomorphic token is introduced. The token computation function we are considering belongs to a family of universal hash functions [27], chosen to preserve the homomorphic properties, which can be perfectly integrated with the verification of erasure-coded data [24], [28]. Subsequently, it is shown how to derive a challenge-response protocol for verifying the storage correctness as well as identifying misbehaving servers. The procedure for file retrieval and error recovery based on erasure-correcting code is also outlined. Finally, we describe how to extend our scheme to third-party auditing with only slight modification of the main design.

3.1 File Distribution Preparation

It is well known that erasure-correcting codes may be used to tolerate multiple failures in distributed storage systems. In cloud data storage, we rely on this technique to disperse the data file F redundantly across a set of n = m + k distributed servers. An (m, k) Reed-Solomon erasure-correcting code is used to create k redundancy parity vectors from m data vectors in such a way that the original m data vectors can be reconstructed from any m out of the m + k data and parity vectors. By placing each of the m + k vectors on a different server, the original data file can survive the failure of any k of the m + k servers without any data loss, with a space overhead of k/m. For support of efficient sequential I/O to the original file, our file layout is systematic, i.e., the unmodified m data file vectors together with the k parity vectors are distributed across the m + k different servers.

Let F = (F_1, F_2, ..., F_m) and F_i = (f_{1i}, f_{2i}, ..., f_{li})^T (i ∈ {1, ..., m}). Here, T (shorthand for transpose) denotes that each F_i is represented as a column vector, and l denotes the data vector size in blocks. All these blocks are elements of GF(2^p). The systematic layout with parity vectors is achieved with the information dispersal matrix A, derived from an m × (m + k) Vandermonde matrix [29]:

    ( 1            1            ...  1            1                ...  1            )
    ( β_1          β_2          ...  β_m          β_{m+1}          ...  β_n          )
    ( ...          ...          ...  ...          ...              ...  ...          )
    ( β_1^{m-1}    β_2^{m-1}    ...  β_m^{m-1}    β_{m+1}^{m-1}    ...  β_n^{m-1}    ),

where β_j (j ∈ {1, ..., n}) are distinct elements randomly picked from GF(2^p).

After a sequence of elementary row transformations, the desired matrix A can be written as

    A = (I | P) = ( 1 0 ... 0   p_11 p_12 ... p_1k )
                  ( 0 1 ... 0   p_21 p_22 ... p_2k )
                  ( ...         ...               )
                  ( 0 0 ... 1   p_m1 p_m2 ... p_mk ),

where I is an m × m identity matrix and P is the secret parity generation matrix of size m × k. Note that A is derived from a Vandermonde matrix; thus it has the property that any m out of the m + k columns form an invertible matrix.

By multiplying F by A, the user obtains the encoded file:

    G = F · A = (G^(1), G^(2), ..., G^(m), G^(m+1), ..., G^(n))
              = (F_1, F_2, ..., F_m, G^(m+1), ..., G^(n)),

where G^(j) = (g_1^(j), g_2^(j), ..., g_l^(j))^T (j ∈ {1, ..., n}). As noticed, the multiplication reproduces the original data file vectors of F, and the remaining part (G^(m+1), ..., G^(n)) consists of k parity vectors generated based on F.
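The systematic (m, k) dispersal above can be sketched as follows. This is a toy illustration, not the paper's construction: it works over the prime field GF(257) instead of GF(2^p), fixes β_j = j, and uses Gauss-Jordan elimination for the elementary row transformations.

```python
q = 257  # toy prime field modulus (assumption; the paper uses GF(2^8) or GF(2^16))

def inv(x):
    # modular inverse via Fermat's little theorem (valid since q is prime)
    return pow(x, q - 2, q)

def systematic_dispersal_matrix(m, k):
    """Build A = (I | P) from an m x (m+k) Vandermonde matrix with beta_j = j."""
    n = m + k
    betas = list(range(1, n + 1))                          # distinct field elements
    B = [[pow(b, i, q) for b in betas] for i in range(m)]  # rows: beta_j^i
    for col in range(m):                                   # Gauss-Jordan on left block
        piv = next(r for r in range(col, m) if B[r][col] != 0)
        B[col], B[piv] = B[piv], B[col]
        s = inv(B[col][col])
        B[col] = [(x * s) % q for x in B[col]]
        for r in range(m):
            if r != col and B[r][col]:
                f = B[r][col]
                B[r] = [(B[r][j] - f * B[col][j]) % q for j in range(n)]
    return B                                               # left m x m block is identity

def encode(F, A):
    """G = F * A; F is l x m (rows of blocks), result is l x n."""
    l, m, n = len(F), len(F[0]), len(A[0])
    return [[sum(F[i][t] * A[t][j] for t in range(m)) % q for j in range(n)]
            for i in range(l)]

def decode(cols, G, A, m):
    """Recover F from any m surviving columns by inverting the m x m submatrix of A."""
    sub = [[A[r][c] for c in cols] for r in range(m)]
    aug = [row + [int(i == r) for i in range(m)] for r, row in enumerate(sub)]
    for col in range(m):                                   # invert via Gauss-Jordan
        piv = next(r for r in range(col, m) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        s = inv(aug[col][col])
        aug[col] = [(x * s) % q for x in aug[col]]
        for r in range(m):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(aug[r][j] - f * aug[col][j]) % q for j in range(2 * m)]
    subinv = [row[m:] for row in aug]
    Gc = [[G[i][c] for c in cols] for i in range(len(G))]
    return [[sum(Gc[i][t] * subinv[t][j] for t in range(m)) % q for j in range(m)]
            for i in range(len(G))]
```

Because the layout is systematic, the first m columns of G reproduce F exactly, and any m of the n = m + k columns suffice to reconstruct it.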

3.2 Challenge Token Precomputation

In order to achieve assurance of data storage correctness and data error localization simultaneously, our scheme entirely relies on the precomputed verification tokens. The main idea is as follows: before file distribution, the user precomputes a certain number of short verification tokens on individual vectors G^(j) (j ∈ {1, ..., n}), each token covering a random subset of data blocks. Later, when the user wants to make sure of the storage correctness for the data in the cloud, he challenges the cloud servers with a set of randomly generated block indices. Upon receiving the challenge, each cloud server computes a short "signature" over the specified blocks and returns it to the user. The values of these signatures should match the corresponding tokens precomputed by the user. Meanwhile, as all servers operate over the same subset of indices, the requested response values for the integrity check must also be a valid codeword determined by the secret matrix P.

Suppose the user wants to challenge the cloud servers t times to ensure the correctness of data storage. Then, he must precompute t verification tokens for each G^(j) (j ∈ {1, ..., n}), using a PRF f(·), a PRP φ(·), a challenge key k_chal, and a master permutation key K_PRP. Specifically, to generate the ith token for server j, the user acts as follows:

1. Derive a random challenge value α_i of GF(2^p) by α_i = f_{k_chal}(i) and a permutation key k_prp^(i) based on K_PRP.

2. Compute the set of r randomly chosen indices:

    {I_q ∈ [1, ..., l] | 1 ≤ q ≤ r}, where I_q = φ_{k_prp^(i)}(q).

3. Calculate the token as

    v_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[I_q], where G^(j)[I_q] = g_{I_q}^(j).

Note that v_i^(j), which is an element of GF(2^p) of small size, is the response the user expects to receive from server j when he challenges it on the specified data blocks.

After token generation, the user has the choice of either keeping the precomputed tokens locally or storing them in encrypted form on the cloud servers. In our case here, the user stores them locally to obviate the need for encryption and to lower the bandwidth overhead during the dynamic data operations, which will be discussed shortly. The details of token generation are shown in Algorithm 1.


Algorithm 1. Token Precomputation.
1: procedure
2:   Choose parameters l, n and functions f, φ;
3:   Choose the number t of tokens;
4:   Choose the number r of indices per verification;
5:   Generate master key K_PRP and challenge key k_chal;
6:   for vector G^(j), j ← 1, n do
7:     for round i ← 1, t do
8:       Derive α_i = f_{k_chal}(i) and k_prp^(i) from K_PRP.
9:       Compute v_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[φ_{k_prp^(i)}(q)]
10:    end for
11:  end for
12:  Store all the v_i^(j)'s locally.
13: end procedure
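Algorithm 1 can be sketched as below. This is a minimal stand-in, not the authors' implementation: HMAC-SHA256 reduced modulo a toy prime field plays the role of the PRF f, a keyed Fisher-Yates shuffle plays the role of the PRP φ, and the token is the weighted sum v_i^(j) = Σ α_i^q · G^(j)[I_q].

```python
import hashlib
import hmac
import random

q = 65537  # toy prime field standing in for GF(2^16) (assumption)

def f(key, i):
    """PRF f_key(i) -> field element, built from HMAC-SHA256 (a stand-in)."""
    d = hmac.new(key, i.to_bytes(4, "big"), hashlib.sha256).digest()
    return int.from_bytes(d[:8], "big") % q

def phi(k_prp_i, l):
    """PRP over block indices [0, l): a keyed Fisher-Yates shuffle stand-in."""
    idx = list(range(l))
    random.Random(k_prp_i).shuffle(idx)
    return idx

def token(G_j, i, k_chal, K_prp, r):
    """Precompute the ith verification token for one encoded vector G^(j)."""
    alpha = f(k_chal, i)                           # challenge value alpha_i
    k_prp_i = hmac.new(K_prp, i.to_bytes(4, "big"), hashlib.sha256).digest()
    I = phi(k_prp_i, len(G_j))[:r]                 # r pseudorandom block indices
    return sum(pow(alpha, s + 1, q) * G_j[I_q] for s, I_q in enumerate(I)) % q

# An honest server computes its challenge response with exactly the same
# aggregation over the revealed indices, so verification is a direct
# comparison of the response against the locally stored token.
```

Because the server recomputes the same aggregate, an honest response equals the stored token, while a corrupted block perturbs the weighted sum except with negligible probability.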

Once all tokens are computed, the final step before file distribution is to blind each parity block g_i^(j) in (G^(m+1), ..., G^(n)) by

    g_i^(j) ← g_i^(j) + f_{k_j}(s_ij), i ∈ {1, ..., l},

where k_j is the secret key for parity vector G^(j) (j ∈ {m+1, ..., n}). This is for the protection of the secret matrix P. We will discuss the necessity of using blinded parities in detail in Section 5.2. After blinding the parity information, the user disperses all the n encoded vectors G^(j) (j ∈ {1, ..., n}) across the cloud servers S_1, S_2, ..., S_n.

3.3 Correctness Verification and Error Localization

Error localization is a key prerequisite for eliminating errors in storage systems. It is also of critical importance to identify potential threats from external attacks. However, many previous schemes [23], [24] do not explicitly consider the problem of data error localization, thus only providing binary results for the storage verification. Our scheme outperforms those by integrating the correctness verification and error localization (misbehaving server identification) in our challenge-response protocol: the response values from servers for each challenge not only determine the correctness of the distributed storage, but also contain information to locate potential data error(s).

Specifically, the procedure of the ith challenge-response for a cross-check over the n servers is described as follows:

1. The user reveals α_i as well as the ith permutation key k_prp^(i) to each server.

2. The server storing vector G^(j) (j ∈ {1, ..., n}) aggregates those r rows specified by the index k_prp^(i) into a linear combination

    R_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[φ_{k_prp^(i)}(q)],

and sends back R_i^(j) (j ∈ {1, ..., n}).

3. Upon receiving the R_i^(j)'s from all the servers, the user takes away the blind values in R_i^(j) (j ∈ {m+1, ..., n}) by

    R_i^(j) ← R_i^(j) − Σ_{q=1}^{r} f_{k_j}(s_{I_q,j}) · α_i^q, where I_q = φ_{k_prp^(i)}(q).

4. Then, the user verifies whether the received values remain a valid codeword determined by the secret matrix P:

    (R_i^(1), ..., R_i^(m)) · P =? (R_i^(m+1), ..., R_i^(n)).

Because all the servers operate over the same subset of indices, the linear aggregation of these r specified rows (R_i^(1), ..., R_i^(n)) has to be a codeword in the encoded file matrix (see Section 5.1 for the correctness analysis). If the above equation holds, the challenge is passed. Otherwise, it indicates that among those specified rows, there exist file block corruptions.

Once the inconsistency among the storage has been successfully detected, we can rely on the precomputed verification tokens to further determine where the potential data error(s) lie. Note that each response R_i^(j) is computed exactly in the same way as the token v_i^(j); thus, the user can simply find which server is misbehaving by verifying the following n equations:

    R_i^(j) =? v_i^(j), j ∈ {1, ..., n}.

Algorithm 2 gives the details of correctness verification and error localization.

Algorithm 2. Correctness Verification and Error Localization.
1: procedure CHALLENGE(i)
2:   Recompute α_i = f_{k_chal}(i) and k_prp^(i) from K_PRP;
3:   Send {α_i, k_prp^(i)} to all the cloud servers;
4:   Receive from servers: {R_i^(j) = Σ_{q=1}^{r} α_i^q · G^(j)[φ_{k_prp^(i)}(q)] | 1 ≤ j ≤ n}
5:   for (j ← m+1, n) do
6:     R_i^(j) ← R_i^(j) − Σ_{q=1}^{r} f_{k_j}(s_{I_q,j}) · α_i^q, I_q = φ_{k_prp^(i)}(q)
7:   end for
8:   if ((R_i^(1), ..., R_i^(m)) · P == (R_i^(m+1), ..., R_i^(n))) then
9:     Accept and ready for the next challenge.
10:  else
11:    for (j ← 1, n) do
12:      if (R_i^(j) != v_i^(j)) then
13:        return server j is misbehaving.
14:      end if
15:    end for
16:  end if
17: end procedure
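The core check of Algorithm 2 can be sketched as follows. This is a hypothetical illustration over a toy prime field, taking already-unblinded responses and precomputed tokens as plain integers; the real scheme works in GF(2^p).

```python
q = 257  # toy prime field modulus (assumption; the paper uses GF(2^p))

def verify_and_localize(R, v, P, m):
    """R: unblinded responses R_i^(1..n); v: precomputed tokens v_i^(1..n);
    P: the m x k secret parity matrix. Returns [] if the challenge passes,
    otherwise the (0-based) indices of misbehaving servers."""
    n, k = len(R), len(R) - m
    # Codeword check: (R^(1),...,R^(m)) . P =? (R^(m+1),...,R^(n))
    parity = [sum(R[t] * P[t][j] for t in range(m)) % q for j in range(k)]
    if parity == R[m:]:
        return []                                  # storage is correct
    # Error localization: compare each response against its token
    return [j for j in range(n) if R[j] != v[j]]
```

For example, with m = 2, k = 1, and a hypothetical parity matrix P = [[3], [5]], the honest response vector [10, 20, 130] passes (3·10 + 5·20 = 130 mod 257), while tampering with the second server's value flags server index 1.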

Discussion. Previous work [23], [24] has suggested using the decoding capability of error-correcting codes to treat data errors. But such an approach imposes a bound b ≤ ⌊k/2⌋ on the number of misbehaving servers b. Namely, they cannot identify misbehaving servers when b > ⌊k/2⌋.^1 However, our token-based approach, while allowing efficient storage correctness validation, does not have this limitation on the number of misbehaving servers. That is, our approach can identify any number of misbehaving servers for b ≤ (m + k). Also note that, for every challenge, each server only needs to send back an aggregated value over the specified set of blocks. Thus, the bandwidth cost of our approach is much less than that of the straightforward approaches that require downloading all the challenged data.

1. In [23], the authors also suggest using brute-force decoding when their dispersal code is an erasure code. However, such a brute-force method is asymptotically inefficient, and still cannot guarantee identification of all misbehaving servers.

3.4 File Retrieval and Error Recovery

Since our layout of the file matrix is systematic, the user can reconstruct the original file by downloading the data vectors from the first m servers, assuming that they return the correct response values. Notice that our verification scheme is based on random spot-checking, so the storage correctness assurance is a probabilistic one. However, by choosing the system parameters (e.g., r, l, t) appropriately and conducting enough rounds of verification, we can guarantee successful file retrieval with high probability. On the other hand, whenever data corruption is detected, the comparison of precomputed tokens and received response values can guarantee the identification of misbehaving server(s) (again with high probability), which will be discussed shortly. Therefore, the user can always ask servers to send back blocks of the r rows specified in the challenge and regenerate the correct blocks by erasure correction, as shown in Algorithm 3, as long as the number of identified misbehaving servers is less than k. (Otherwise, there is no way to recover the corrupted blocks due to lack of redundancy, even if we know the positions of the misbehaving servers.) The newly recovered blocks can then be redistributed to the misbehaving servers to maintain the correctness of storage.

Algorithm 3. Error Recovery.
1: procedure
   % Assume the block corruptions have been detected among the specified r rows;
   % Assume s ≤ k servers have been identified as misbehaving
2:   Download r rows of blocks from the servers;
3:   Treat the s servers as erasures and recover the blocks.
4:   Resend the recovered blocks to the corresponding servers.
5: end procedure

3.5 Toward Third Party Auditing

As discussed in our architecture, in case the user does not have the time, feasibility, or resources to perform the storage correctness verification, he can optionally delegate this task to an independent third-party auditor (TPA), making the cloud storage publicly verifiable. However, as pointed out by the recent work [30], [31], to securely introduce an effective TPA, the auditing process should bring in no new vulnerabilities toward user data privacy. Namely, the TPA should not learn the user's data content through the delegated data auditing. Now we show that with only slight modification, our protocol can support privacy-preserving third party auditing.

The new design is based on the observation of the linear property of the parity vector blinding process. Recall that the reason for the blinding process is to protect the secret matrix P against cloud servers. However, this can be achieved either by blinding the parity vector or by blinding the data vector (we assume $k < m$). Thus, if we blind the data vector before file distribution encoding, then the storage verification task can be successfully delegated to third party auditing in a privacy-preserving manner. Specifically, the new protocol is described as follows:

1. Before file distribution, the user blinds each file block data $g_i^{(j)}$ in $(G^{(1)}, \ldots, G^{(m)})$ by $g_i^{(j)} \leftarrow g_i^{(j)} + f_{k_j}(s_{ij})$, $i \in \{1, \ldots, l\}$, where $k_j$ is the secret key for data vector $G^{(j)}$ $(j \in \{1, \ldots, m\})$.

2. Based on the blinded data vectors $(G^{(1)}, \ldots, G^{(m)})$, the user generates $k$ parity vectors $(G^{(m+1)}, \ldots, G^{(n)})$ via the secret matrix P.

3. The user calculates the $i$th token for server $j$ as in the previous scheme: $v_i^{(j)} = \sum_{q=1}^{r} \alpha_i^q \cdot G^{(j)}[I_q]$, where $G^{(j)}[I_q] = g_{I_q}^{(j)}$ and $\alpha_i = f_{k_{chal}}(i) \in GF(2^p)$.

4. The user sends the token set $\{v_i^{(j)}\}_{1 \le i \le t, 1 \le j \le n}$, secret matrix P, permutation and challenge key $K_{PRP}$, and $k_{chal}$ to the TPA for auditing delegation.

The correctness validation and misbehaving server identification for the TPA is similar to the previous scheme. The only difference is that the TPA does not have to take away the blinding values in the servers' responses $R^{(j)}$ $(j \in \{1, \ldots, n\})$ but verifies directly. As the TPA does not know the secret blinding keys $k_j$ $(j \in \{1, \ldots, m\})$, there is no way for the TPA to learn the data content information during the auditing process. Therefore, privacy-preserving third party auditing is achieved. Note that compared to the previous scheme, we only change the sequence of file encoding, token precomputation, and blinding. Thus, the overall computation overhead and communication overhead remain roughly the same.
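The reordering can be demonstrated in a few lines. The sketch below is a toy under loudly stated assumptions: the prime field GF(257) stands in for $GF(2^p)$, a keyed `blake2b` hash stands in for the PRF $f_{k_j}$, and the names (`Pmat`, `prf`, etc.) are ours. Because blinding happens before encoding, the auditor's aggregated responses satisfy the parity relation directly, with no unblinding.

```python
# Toy demo: blind data vectors first, encode parities over the blinded
# data, then check (R^(1..m)) * P == (R^(m+1..n)) on aggregated responses.
import hashlib

P = 257  # illustrative prime modulus (the paper uses GF(2^p))

def prf(key, s):
    # keyed PRF stand-in: keyed blake2b reduced mod P
    d = hashlib.blake2b(s.to_bytes(4, "big"), key=key, digest_size=4)
    return int.from_bytes(d.digest(), "big") % P

l, m, k = 4, 2, 1
F = [[7, 9], [1, 5], [8, 2], [6, 3]]      # l rows of m data blocks
Pmat = [[4], [11]]                         # secret m-by-k parity matrix P
keys = [b"key-0", b"key-1"]                # per-vector blinding keys k_j

# step 1: blind the DATA vectors (the TPA never sees raw data)
G = [[(F[i][j] + prf(keys[j], i)) % P for j in range(m)] for i in range(l)]
# step 2: parities computed over the already-blinded data
for i in range(l):
    G[i] += [sum(G[i][j] * Pmat[j][c] for j in range(m)) % P
             for c in range(k)]

# auditor-side check on aggregated responses, no unblinding needed
alpha = 5                                  # challenge value alpha_i
rows = [0, 2]                              # sampled row indices I_q
R = [sum(pow(alpha, q + 1, P) * G[rows[q]][j] for q in range(len(rows))) % P
     for j in range(m + k)]
lhs = [sum(R[j] * Pmat[j][c] for j in range(m)) % P for c in range(k)]
assert lhs == R[m:m + k]                   # check equation holds directly
```

The check passes because both data responses and parity responses are the same linear combination of rows, and the parity rows were generated from the blinded data in the first place.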

4 PROVIDING DYNAMIC DATA OPERATION SUPPORT

So far, we assumed that F represents static or archived data. This model may fit some application scenarios, such as libraries and scientific data sets. However, in cloud data storage, there are many potential scenarios where data stored in the cloud is dynamic, like electronic documents, photos, or log files. Therefore, it is crucial to consider the dynamic case, where a user may wish to perform various block-level operations of update, delete, and append to modify the data file while maintaining the storage correctness assurance.

Since data do not reside at the users' local site but in the cloud service provider's address domain, supporting dynamic data operation can be quite challenging. On the one hand, the CSP needs to process the data dynamics requests without knowing the secret keying material. On the other hand, users need to ensure that all dynamic data operation requests have been faithfully processed by the CSP. To address this problem, we briefly explain our approach methodology here and provide the details later. For any dynamic data operation, the user must first generate the corresponding resulting file blocks and parities. This part of the operation has to be carried out by the user, since only he knows the secret matrix P. Besides, to ensure that the changes to data blocks are correctly reflected in the cloud address domain, the user also needs to modify the corresponding storage verification tokens to accommodate the changes on data blocks. Only with the accordingly changed storage verification tokens can the previously discussed challenge-response protocol be carried on successfully even after data dynamics.

WANG ET AL.: TOWARD SECURE AND DEPENDABLE STORAGE SERVICES IN CLOUD COMPUTING 225

In other words, these verification tokens help to ensure that

CSP would correctly execute the processing of any dynamic

data operation request. Otherwise, CSP would be caught

cheating with high probability in the protocol execution

later on. Given this design methodology, the straightforward and trivial way to support these operations is for the user to download all the data from the cloud servers and recompute all the parity blocks as well as the verification tokens. This would clearly be highly inefficient. In this section, we will show how our scheme can explicitly and efficiently handle dynamic data operations for cloud data storage, by utilizing the linear property of the Reed-Solomon code and the verification token construction.

4.1 Update Operation

In cloud data storage, a user may need to modify some data block(s) stored in the cloud, from the current value $f_{ij}$ to a new one, $f_{ij} + \Delta f_{ij}$. We refer to this operation as data update. Fig. 2 gives the high-level logical representation of data block update. Due to the linear property of the Reed-Solomon code, a user can perform the update operation and generate the updated parity blocks by using $\Delta f_{ij}$ only, without involving any other unchanged blocks. Specifically, the user can construct a general update matrix $\Delta F$ as

$$\Delta F = \begin{pmatrix} \Delta f_{11} & \Delta f_{12} & \ldots & \Delta f_{1m} \\ \Delta f_{21} & \Delta f_{22} & \ldots & \Delta f_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ \Delta f_{l1} & \Delta f_{l2} & \ldots & \Delta f_{lm} \end{pmatrix} = (\Delta F_1, \Delta F_2, \ldots, \Delta F_m).$$

Note that we use zero elements in $\Delta F$ to denote the unchanged blocks, and thus $\Delta F$ should be a sparse matrix most of the time (we assume that, for a certain time epoch, the user only updates a relatively small part of file F). To maintain the corresponding parity vectors as well as stay consistent with the original file layout, the user can multiply $\Delta F$ by A and thus generate the update information for both the data vectors and parity vectors as follows:

$$\Delta F \cdot A = (\Delta G^{(1)}, \ldots, \Delta G^{(m)}, \Delta G^{(m+1)}, \ldots, \Delta G^{(n)}) = (\Delta F_1, \ldots, \Delta F_m, \Delta G^{(m+1)}, \ldots, \Delta G^{(n)}),$$

where $\Delta G^{(j)}$ $(j \in \{m+1, \ldots, n\})$ denotes the update information for the parity vector $G^{(j)}$.

Because the data update operation inevitably affects some or all of the remaining verification tokens, after preparation of the update information, the user has to amend those unused tokens for each vector $G^{(j)}$ to maintain the same storage correctness assurance. In other words, for all the unused tokens, the user needs to exclude every occurrence of the old data block and replace it with the new one. Thanks to the homomorphic construction of our verification token, the user can perform the token update efficiently. To give more details, suppose a block $G^{(j)}[I_s]$, which is covered by the specific token $v_i^{(j)}$, has been changed to $G^{(j)}[I_s] + \Delta G^{(j)}[I_s]$, where $I_s = \phi_{k_{prp}^{(i)}}(s)$. To maintain the usability of token $v_i^{(j)}$, it is not hard to verify that the user can simply update it by $v_i^{(j)} \leftarrow v_i^{(j)} + \alpha_i^s \cdot \Delta G^{(j)}[I_s]$, without retrieving the other $r-1$ blocks required in the precomputation of $v_i^{(j)}$.
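The homomorphic amendment above can be checked numerically. This sketch uses the prime field GF(257) instead of the paper's $GF(2^p)$ (our simplification); it confirms that adding $\alpha_i^s \cdot \Delta G^{(j)}[I_s]$ to the old token equals recomputing the token over the updated blocks.

```python
# Homomorphic token amendment: update one covered block and fix the
# token with a single correction term, then compare against a full
# recomputation over the updated blocks.
P = 257                               # illustrative prime modulus
alpha = 7                             # challenge value alpha_i
blocks = [10, 20, 30, 40, 50]         # G^(j)[I_1..I_r] covered by token v_i
r = len(blocks)
token = sum(pow(alpha, q + 1, P) * blocks[q] for q in range(r)) % P

# the block at position s (1-indexed exponent) changes by delta
s, delta = 3, 17
blocks[s - 1] = (blocks[s - 1] + delta) % P

# amend the token WITHOUT touching the other r-1 blocks
token = (token + pow(alpha, s, P) * delta) % P

recomputed = sum(pow(alpha, q + 1, P) * blocks[q] for q in range(r)) % P
assert token == recomputed            # single-term fix matches full recompute
```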

After the amendment to the affected tokens,^2 the user needs to blind the update information $\Delta g_i^{(j)}$ for each parity block in $(\Delta G^{(m+1)}, \ldots, \Delta G^{(n)})$ to hide the secret matrix P by $\Delta g_i^{(j)} \leftarrow \Delta g_i^{(j)} + f_{k_j}(s_{ij}^{ver})$, $i \in \{1, \ldots, l\}$. Here, we use a new seed $s_{ij}^{ver}$ for the PRF. The version number $ver$ functions like a counter which helps the user keep track of the blinding information on the specific parity blocks. After blinding, the user sends the update information to the cloud servers, which perform the update operation as $G^{(j)} \leftarrow G^{(j)} + \Delta G^{(j)}$ $(j \in \{1, \ldots, n\})$.

Discussion. Note that by using the new seed $s_{ij}^{ver}$ for the PRF every time (for a block update operation), we can ensure the freshness of the random values embedded into the parity blocks. In other words, the cloud servers cannot simply abstract away the random blinding information on the parity blocks by subtracting the old and newly updated parity blocks. As a result, the secret matrix P is still well protected, and the guarantee of storage correctness remains.

4.2 Delete Operation

Sometimes, after being stored in the cloud, certain data blocks may need to be deleted. The delete operation we are considering is a general one, in which the user replaces the data block with zero or some special reserved data symbol. From this point of view, the delete operation is actually a special case of the data update operation, where the original data blocks can be replaced with zeros or some predetermined special blocks. Therefore, we can rely on the update procedure to support the delete operation, i.e., by setting $\Delta f_{ij}$ in $\Delta F$ to be $-f_{ij}$. Also, all the affected tokens have to be modified and the updated parity information has to be blinded using the same method specified in the update operation.

226 IEEE TRANSACTIONS ON SERVICES COMPUTING, VOL. 5, NO. 2, APRIL-JUNE 2012

2. In practice, it is possible that only a fraction of tokens need amendment, since the updated blocks may not be covered by all the tokens.

Fig. 2. Logical representation of data dynamics, including block update, append, and delete.

4.3 Append Operation

In some cases, the user may want to increase the size of his stored data by adding blocks at the end of the data file, which we refer to as data append. We anticipate that the most frequent append operation in cloud data storage is bulk append, in which the user needs to upload a large number of blocks (not a single block) at one time.

Given the file matrix F illustrated in file distribution preparation, appending blocks toward the end of a data file is equivalent to concatenating corresponding rows at the bottom of the matrix layout for file F (see Fig. 2). In the beginning, there are only l rows in the file matrix. To simplify the presentation, we suppose the user wants to append m blocks at the end of file F, denoted as $(f_{l+1,1}, f_{l+1,2}, \ldots, f_{l+1,m})$ (we can always use zero-padding to make a row of m elements). With the secret matrix P, the user can directly calculate the append blocks for each parity server as $(f_{l+1,1}, \ldots, f_{l+1,m}) \cdot P = (g_{l+1}^{(m+1)}, \ldots, g_{l+1}^{(n)})$.

To ensure the newly appended blocks are covered by our challenge tokens, we need a slight modification to our token precomputation. Specifically, we require the user to expect the maximum size in blocks, denoted as $l_{max}$, for each of his data vectors. This idea of supporting block append was first suggested by Ateniese et al. [14] in a single server setting, and it relies on both the initial budget for the maximum anticipated data size $l_{max}$ in each encoded data vector and the system parameter $r_{max} = \lceil r \cdot (l_{max}/l) \rceil$ for each precomputed challenge-response token. The precomputation of the $i$th token on server $j$ is modified as follows: $v_i = \sum_{q=1}^{r_{max}} \alpha_i^q \cdot G^{(j)}[I_q]$, where

$$G^{(j)}[I_q] = \begin{cases} G^{(j)}[\phi_{k_{prp}^{(i)}}(q)], & \phi_{k_{prp}^{(i)}}(q) \le l, \\ 0, & \phi_{k_{prp}^{(i)}}(q) > l, \end{cases}$$

and the PRP $\phi_{key}(\cdot)$ is modified as $\phi_{key}(\cdot): \{0,1\}^{\log_2(l_{max})} \rightarrow \{0,1\}^{\log_2(l_{max})}$. This formula guarantees that, on average, there will be r indices falling into the range of the existing l blocks.

Because the cloud servers and the user have an agreement on the number of existing blocks in each vector $G^{(j)}$, the servers will follow exactly the above procedure when recomputing the token values upon receiving the user's challenge request.

Now when the user is ready to append new blocks, i.e., both the file blocks and the corresponding parity blocks are generated, the total length of each vector $G^{(j)}$ will be increased and fall into the range $[l, l_{max}]$. Therefore, the user will update those affected tokens by adding $\alpha_i^s \cdot G^{(j)}[I_s]$ to the old $v_i$ whenever $G^{(j)}[I_s] \ne 0$ for $I_s > l$, where $I_s = \phi_{k_{prp}^{(i)}}(s)$. The parity blinding is similar to that introduced in the update operation, and thus is omitted here.
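The zero-extension rule and the incremental token amendment for append can be exercised in a toy setting. Assumptions are ours, not the paper's: the prime field GF(257) replaces $GF(2^p)$, and a seeded shuffle stands in for the PRP $\phi$; the names `prp` and `token` are illustrative.

```python
# Append-aware tokens: indices are drawn over the full budget l_max, but
# positions beyond the current length l contribute 0; after an append,
# the token is amended by adding alpha^q * new_block for newly-covered q.
import math
import random

P = 257                                 # illustrative prime modulus
l, l_max, r = 4, 8, 4
r_max = math.ceil(r * (l_max / l))      # budget so ~r hits land within [1, l]
alpha = 5
vec = [3, 1, 4, 1]                      # current l blocks of G^(j)

def prp(i, q):
    # toy PRP over {1..l_max}: keyed permutation, deterministic per token i
    perm = list(range(1, l_max + 1))
    random.Random(i).shuffle(perm)
    return perm[q - 1]

def token(i, blocks):
    total = 0
    for q in range(1, r_max + 1):
        idx = prp(i, q)
        g = blocks[idx - 1] if idx <= len(blocks) else 0  # zero beyond l
        total = (total + pow(alpha, q, P) * g) % P
    return total

before = token(1, vec)
vec2 = vec + [9, 2, 7, 5]               # bulk append: l grows to l_max
# incremental amendment: only indices that were previously out of range
# (and hence contributed 0) need a correction term
delta = sum(pow(alpha, q, P) * vec2[prp(1, q) - 1]
            for q in range(1, r_max + 1) if prp(1, q) > l) % P
assert token(1, vec2) == (before + delta) % P
```

By linearity, the amended token matches a full recomputation, so old tokens remain usable after the append.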

4.4 Insert Operation

An insert operation on the data file refers to an append operation at the desired index position while maintaining the same data block structure for the whole data file, i.e., inserting a block $F[j]$ corresponds to shifting all blocks starting with index $j+1$ by one slot. Thus, an insert operation may affect many rows in the logical data file matrix F, and a substantial number of computations are required to renumber all the subsequent blocks as well as recompute the challenge-response tokens. Hence, a direct insert operation is difficult to support.

In order to fully support the block insertion operation, recent work [15], [16] suggests utilizing an additional data structure (for example, a Merkle Hash Tree [32]) to maintain and enforce the block index information. Following this line of research, we can circumvent the dilemma of block insertion by viewing each insertion as a logical append operation at the end of file F. Specifically, if we also use an additional data structure to maintain such logical-to-physical block index mapping information, then all block insertions can be treated as logical append operations, which can be efficiently supported. On the other hand, using the block index mapping information, the user can still access or retrieve the file as it is. Note that as a tradeoff, the extra data structure information has to be maintained locally on the user side.
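A minimal sketch of the logical-to-physical mapping idea (our naming; the papers cited above use a Merkle Hash Tree, here replaced by a plain list for illustration): an "insert" only appends a physical row and splices its pointer into the logical order, so no stored blocks are renumbered.

```python
# Insert-as-logical-append: physical rows are append-only (as stored in
# the cloud), while a locally kept map gives the logical block order.
class IndexedFile:
    def __init__(self, blocks):
        self.physical = list(blocks)             # rows as stored remotely
        self.logical = list(range(len(blocks)))  # logical -> physical map

    def append(self, block):
        self.physical.append(block)
        self.logical.append(len(self.physical) - 1)

    def insert(self, pos, block):
        # physically an append; logically a splice of the mapping only
        self.physical.append(block)
        self.logical.insert(pos, len(self.physical) - 1)

    def read(self):
        # the user still retrieves the file in its logical order
        return [self.physical[p] for p in self.logical]

f = IndexedFile(["b0", "b1", "b2"])
f.insert(1, "new")
assert f.read() == ["b0", "new", "b1", "b2"]
assert f.physical == ["b0", "b1", "b2", "new"]   # no physical shifting
```

The tradeoff noted in the text is visible here: `logical` must live on the user side and grow with the file.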

5 SECURITY ANALYSIS AND PERFORMANCE EVALUATION

In this section, we analyze our proposed scheme in terms of correctness, security, and efficiency. Our security analysis focuses on the adversary model defined in Section 2. We also evaluate the efficiency of our scheme via implementation of both file distribution preparation and verification token precomputation.

5.1 Correctness Analysis

First, we analyze the correctness of the verification procedure. Upon obtaining all the responses $R_i^{(j)}$ from the servers and taking away the random blind values from $R_i^{(j)}$ $(j \in \{m+1, \ldots, n\})$, the user relies on the equation $(R_i^{(1)}, \ldots, R_i^{(m)}) \cdot P \stackrel{?}{=} (R_i^{(m+1)}, \ldots, R_i^{(n)})$ to ensure the storage correctness. To see why this is true, we can rewrite the equation according to the token computation:

$$\left( \sum_{q=1}^{r} \alpha_i^q g_{I_q}^{(1)}, \ldots, \sum_{q=1}^{r} \alpha_i^q g_{I_q}^{(m)} \right) \cdot P = \left( \sum_{q=1}^{r} \alpha_i^q g_{I_q}^{(m+1)}, \ldots, \sum_{q=1}^{r} \alpha_i^q g_{I_q}^{(n)} \right),$$

and, hence, the left-hand side (LHS) of the equation expands as

$$LHS = (\alpha_i, \alpha_i^2, \ldots, \alpha_i^r) \begin{pmatrix} g_{I_1}^{(1)} & g_{I_1}^{(2)} & \ldots & g_{I_1}^{(m)} \\ g_{I_2}^{(1)} & g_{I_2}^{(2)} & \ldots & g_{I_2}^{(m)} \\ \vdots & \vdots & \ddots & \vdots \\ g_{I_r}^{(1)} & g_{I_r}^{(2)} & \ldots & g_{I_r}^{(m)} \end{pmatrix} P = (\alpha_i, \alpha_i^2, \ldots, \alpha_i^r) \begin{pmatrix} g_{I_1}^{(m+1)} & g_{I_1}^{(m+2)} & \ldots & g_{I_1}^{(n)} \\ g_{I_2}^{(m+1)} & g_{I_2}^{(m+2)} & \ldots & g_{I_2}^{(n)} \\ \vdots & \vdots & \ddots & \vdots \\ g_{I_r}^{(m+1)} & g_{I_r}^{(m+2)} & \ldots & g_{I_r}^{(n)} \end{pmatrix},$$


which equals the right-hand side (RHS) as required. Thus, it is clear that as long as each server operates on the same specified subset of rows, the above checking equation will always hold.

5.2 Security Strength

5.2.1 Detection Probability against Data Modification

In our scheme, servers are required to operate only on specified rows in each challenge-response protocol execution. We will show that this "sampling" strategy on selected rows instead of all rows can greatly reduce the computational overhead on the server, while maintaining a high detection probability for data corruption.

Suppose $n_c$ servers are misbehaving due to possible compromise or Byzantine failure. In the following analysis, we do not limit the value of $n_c$, i.e., $n_c \le n$. Thus, all the analysis results hold even if all the servers are compromised. We will leave the explanation of the collusion resistance of our scheme against this worst case scenario to a later section. Assume the adversary modifies the data blocks in z rows out of the l rows in the encoded file matrix. Let r be the number of different rows for which the user asks for checking in a challenge. Let X be a discrete random variable defined to be the number of rows chosen by the user that match the rows modified by the adversary. We first analyze the matching probability that at least one of the rows picked by the user matches one of the rows modified by the adversary: $P_m^r = 1 - P\{X = 0\} = 1 - \prod_{i=0}^{r-1} \left(1 - \min\left\{\frac{z}{l-i}, 1\right\}\right) \ge 1 - \left(\frac{l-z}{l}\right)^r$. If none of the specified r rows in the $i$th verification process are deleted or modified, the adversary avoids the detection.

Next, we study the probability of a false negative result: there exists at least one invalid response calculated from those specified r rows, but the checking equation still holds. Consider the responses $R_i^{(1)}, \ldots, R_i^{(n)}$ returned from the data storage servers for the $i$th challenge; each response value $R_i^{(j)}$, calculated within $GF(2^p)$, is based on r blocks on server j. The number of responses $R^{(m+1)}, \ldots, R^{(n)}$ from parity servers is $k = n - m$. Thus, according to Proposition 2 of our previous work in [33], the false negative probability is $P_f^r = Pr_1 + Pr_2$, where $Pr_1 = \frac{(1 + 2^{-p})^{n_c - 1}}{2^{n_c - 1}}$ and $Pr_2 = (1 - Pr_1)(2^{-p})^k$.

Based on the above discussion, it follows that the probability of data modification detection across all storage servers is $P_d = P_m^r \cdot (1 - P_f^r)$. Fig. 3 plots $P_d$ for different values of l, r, z while we set $p = 16$, $n_c = 10$, and $k = 5$.^3 From the figure we can see that if more than a small fraction of the data file is corrupted, then it suffices to challenge a small constant number of rows in order to achieve detection with high probability. For example, if $z = 1\%$ of l, every token only needs to cover 460 indices in order to achieve a detection probability of at least 99 percent.
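The 99 percent figure can be reproduced numerically. The sketch below plugs illustrative parameters ($l = 1000$ for concreteness) into the formulas as reconstructed above; the $Pr_1$ expression in particular is our reading of the text, so treat the exact constants as illustrative.

```python
# Numeric check of the detection probability P_d = P_m^r * (1 - P_f^r)
# for z = 1% of l and tokens covering 460 indices (p=16, n_c=10, k=5).
l = 1000           # blocks per server (illustrative choice)
z = 10             # corrupted rows, z = 1% of l
r = 460            # indices covered by each token
p, n_c, k = 16, 10, 5

miss = 1.0
for i in range(r):
    miss *= 1 - min(z / (l - i), 1)
P_m = 1 - miss                                      # matching probability

Pr1 = (1 + 2 ** -p) ** (n_c - 1) / 2 ** (n_c - 1)   # reconstructed formula
Pr2 = (1 - Pr1) * (2 ** -p) ** k
P_f = Pr1 + Pr2                                     # false negative prob.
P_d = P_m * (1 - P_f)                               # detection probability
assert P_d > 0.99                                   # matches the 99% claim
```

With $p = 16$, $P_f^r$ is dominated by $Pr_1 \approx 2^{-(n_c-1)}$, so the sampling term $P_m^r$ drives the overall detection probability, consistent with footnote 3.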

5.2.2 Identification Probability for Misbehaving Servers

We have shown that, if the adversary modifies the data blocks among any of the data storage servers, our sampling checking scheme can successfully detect the attack with high probability. As long as the data modification is caught, the user will further determine which server is malfunctioning. This can be achieved by comparing the response values $R_i^{(j)}$ with the prestored tokens $v_i^{(j)}$, where $j \in \{1, \ldots, n\}$. The probability for error localization, i.e., identifying misbehaving server(s), can be computed in a similar way. It is the product of the matching probability for the sampling check and the probability of the complementary event for the false negative result. Obviously, the matching probability is $\widehat{P}_m^r = 1 - \prod_{i=0}^{r-1} \left(1 - \min\left\{\frac{\hat{z}}{l-i}, 1\right\}\right)$, where $\hat{z} \le z$.

Next, we consider the false negative probability that $R_i^{(j)} = v_i^{(j)}$ when at least one of the $\hat{z}$ blocks is modified. According to [33, Proposition 1], tokens calculated in $GF(2^p)$ for two different data vectors collide with probability $\widehat{P}_f^r = 2^{-p}$. Thus, the identification probability for misbehaving server(s) is $\widehat{P}_d = \widehat{P}_m^r \cdot (1 - \widehat{P}_f^r)$.

3. Note that $n_c$ and k only affect the false negative probability $P_f^r$. However, in our scheme, since $p = 16$ almost dominates the negligibility of $P_f^r$, the values of $n_c$ and k have little effect on the plot of $P_d$.

Fig. 3. The detection probability $P_d$ against data modification. We show $P_d$ as a function of l (the number of blocks on each cloud storage server) and r (the number of rows queried by the user, shown as a percentage of l) for two values of z (the number of rows modified by the adversary). Both graphs are plotted under $p = 16$, $n_c = 10$, and $k = 5$, but with different scales. (a) $z = 1\%$ of l. (b) $z = 10\%$ of l.

Along with the analysis of the detection probability, if $z = 1\%$ of l and each token covers 460 indices, the identification probability for

misbehaving servers is at least 99 percent. Note that if the number of detected misbehaving servers is less than the number of parity vectors, we can use erasure-correcting code to recover the corrupted data, achieving storage dependability as shown in Section 3.4 and Algorithm 3.

5.2.3 Security Strength against Worst Case Scenario

We now explain why it is necessary to blind the parity blocks and how our proposed schemes achieve collusion resistance against the worst case scenario in the adversary model.

Recall that in the file distribution preparation, the redundancy parity vectors are calculated via multiplying the file matrix F by P, where P is the secret parity generation matrix we later rely on for storage correctness assurance. If we disperse all the generated vectors directly after token precomputation, i.e., without blinding, malicious servers that collaborate can reconstruct the secret matrix P easily: they can pick blocks from the same rows among the data and parity vectors to establish a set of $m \cdot k$ linear equations and solve for the $m \cdot k$ entries of the parity generation matrix P. Once they have the knowledge of P, those malicious servers can consequently modify any part of the data blocks and calculate the corresponding parity blocks, and vice versa, making their codeword relationship always consistent. Therefore, our storage correctness challenge scheme would be undermined: even if those modified blocks are covered by the specified rows, the storage correctness check equation would always hold.

To prevent colluding servers from recovering P and making up consistently related data and parity blocks, we utilize the technique of adding random perturbations to the encoded file matrix and hence hide the secret matrix P. We make use of a keyed pseudorandom function $f_{k_j}(\cdot)$ with key $k_j$ and seed $s_{ij}^{ver}$, both of which have been introduced previously. In order to maintain the systematic layout of the data file, we only blind the parity blocks with random perturbations (we can also blind only the data blocks and achieve privacy-preserving third party auditing, as shown in Section 3.5). Our purpose is to add "noise" to the set of linear equations and make it computationally infeasible to solve for the correct secret matrix P. By blinding each parity block with a random perturbation, the malicious servers no longer have all the necessary information to build up the correct linear equation groups and therefore cannot derive the secret matrix P.
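The collusion attack that blinding prevents can be shown concretely. This is a toy under our assumptions: the prime field GF(257) replaces $GF(2^p)$, $m = 2$ and $k = 1$, and fixed offsets stand in for the PRF output; with unblinded parity, two observed rows give a solvable 2x2 system per parity column of P.

```python
# Colluding servers solving for a column of the secret P from two rows of
# unblinded data+parity, and failing once the parity rows are blinded.
P = 257                                 # illustrative prime modulus
m = 2
secret = [5, 12]                        # one secret parity column of P
data = [[3, 8], [7, 2]]                 # two rows seen by colluding servers
parity = [sum(d * s for d, s in zip(row, secret)) % P for row in data]

# Cramer's rule mod P recovers the secret column exactly
det = (data[0][0] * data[1][1] - data[0][1] * data[1][0]) % P
inv = pow(det, P - 2, P)
x0 = (data[1][1] * parity[0] - data[0][1] * parity[1]) * inv % P
x1 = (data[0][0] * parity[1] - data[1][0] * parity[0]) * inv % P
assert [x0, x1] == secret               # P recovered: blinding is essential

# once parity rows carry pseudorandom offsets, the same solve targets a
# perturbed system and no longer yields the secret column
blinded = [(pv + off) % P for pv, off in zip(parity, [101, 59])]
y0 = (data[1][1] * blinded[0] - data[0][1] * blinded[1]) * inv % P
assert y0 != x0                         # the "noise" breaks the equations
```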

5.3 Performance Evaluation

We now assess the performance of the proposed storage auditing scheme. We focus on the cost of file distribution preparation as well as token generation. Our experiment is conducted on a system with an Intel Core 2 processor running at 1.86 GHz, 2,048 MB of RAM, and a 7,200 RPM Western Digital 250 GB Serial ATA drive. Algorithms are implemented using the open-source erasure coding library Jerasure [34], written in C. All results represent the mean of 20 trials.

5.3.1 File Distribution Preparation

As discussed, file distribution preparation includes the generation of parity vectors (the encoding part) as well as the corresponding parity blinding part. We consider two sets of different parameters for the (m, k) Reed-Solomon encoding, both of which work over $GF(2^{16})$. Fig. 4 shows the total cost for preparing a 1 GB file before outsourcing. In the figure on the left, we set the number of data vectors m constant at 10, while decreasing the number of parity vectors k from 10 to 2. In the one on the right, we keep the total number of data and parity vectors m + k fixed at 22, and change the number of data vectors m from 18 to 10. From the figure
