Dynamic Auditing And Accounting Mechanism


Abstract—Cloud computing is an advancement in the field of information technology. The term cloud computing describes a pay-as-you-use model in which a pool of resources and tasks is assigned to the clients who need them. Without the Internet it is impossible to provide a cloud environment. Many gains can be achieved through the cloud, such as scalability, availability, flexibility, large storage space, and low cost. Although it offers these benefits, the major challenge of cloud technology is the security and confidentiality of data stored and shared within or between third-party environments. To overcome this problem, an auditability scheme can be employed; this methodology supports checking the integrity of the stored data and preserves user data privacy. Our work also proposes an accounting mechanism for metering resource usage, in which the system encloses user data along with their policies. The result of the accounting mechanism is the generation of a log record.

Index terms: Accounting, Auditing

I. INTRODUCTION

Cloud computing is an environment where software applications, processing power, data, and potentially even artificial intelligence are accessed over the Internet. Privately exchanging messages and sharing photos and videos on social networking sites such as Facebook is already very common [11]. However, these types of cloud computing activities are just the beginning. Indeed, it is likely that within a decade the vast majority of personal and business computing will be Internet based, the Internet being the backbone of the cloud. Cloud computing is dynamically scalable because users only have to consume the amount of online resources they actually want. Just as we are used to drawing as much or as little electricity as we need from the power grid, anybody can now obtain as many or as few computing resources from the cloud as they require at any particular point in time.

One of its major disadvantages is security: users fear losing control over their own data. To overcome this problem, there is a need to audit the resources stored in the third-party environment and to monitor resource usage through accounting schemes. For example, users need to be able to ensure that their data are handled according to the service level agreements made at the time they sign up for services in the cloud. Another major issue in the cloud concerns the cloud service providers. A cloud service provider (CSP) is a person or organization responsible for making a service available to interested parties. A cloud provider acquires and manages the computing infrastructure required for providing the services, runs the cloud software that provides the services, and arranges to deliver the cloud services to the cloud consumers through network access [24]. The following are the major disadvantages of a CSP [25]:

No influence on maintenance levels or the frequency of fixes when using cloud services from a CSP.

Little or no insight into CSP contingency procedures, especially backup, restore, and disaster recovery.

Measurement of resource usage and end-user activities lies in the hands of the CSP.

No easy migration to another CSP.

The above challenges can be addressed by an approach named the Secured Cloud Accountability framework [1]. In contrast to the traditional hide-it-or-lose-it perspective, information accountability keeps data usage transparent, and our proposed framework supports end-to-end accountability in a distributed manner. The framework ensures the security of data being sent to cloud storage by using ciphertext-policy based encryption [15]. In addition to these security features, the framework also permits monitoring of data usage. Usage is tracked on the basis of service level agreements, and these agreements can be updated dynamically. Alongside the accountability feature, the framework includes a third-party auditor (TPA) that performs the auditing of the data held in cloud storage. The system encourages public auditing by proposing a protocol that enables dynamic data operations such as block insertion.

II. RELATED WORK

The following section gives an overview of various security and privacy issues related to the cloud.

A. Issues related to privacy

Remote data integrity checking is a protocol that focuses on how frequently and efficiently we can verify whether a cloud server faithfully stores the user's data without retrieving it [4]. The legal implications of data and applications being held by a third party are complex and not well understood. There is also a potential lack of control and transparency when a third party holds the data. Part of the hype of cloud computing is that the cloud can be implementation independent, but in reality regulatory compliance requires transparency into the cloud [5]. Data leakage out of cloud computing environments is a fundamental cloud security concern for both end users and cloud service providers. An effective and flexible distributed scheme with explicit dynamic data support has been proposed to ensure the correctness of users' data in the cloud. It relies on erasure-correcting codes in the file distribution preparation to provide redundancy and guarantee data dependability; this construction drastically reduces the communication and storage overhead compared to traditional replication-based file distribution techniques [6]. A data protection scheme with public auditing has been outlined that addresses a number of these factors by providing a mechanism for data to be encrypted in the cloud without loss of accessibility or functionality for authorized parties; this scheme is not necessarily a replacement for traditional privacy and security measures [7]. For cloud computing services to become successful and sustainable, we need a systematic framework for verifiable resource accounting. It eases any concerns that customers may have with providers' pricing and performance guarantees, and it gives customers a basis for accurately comparing different cloud providers [8].

B. Other issues in maintaining privacy

Privacy and security in the cloud pose a big challenge. Encrypting the data strengthens the security already in place in the cloud, and encryption also provides a means of ensuring that data can be destroyed if necessary. Risk management of the data throughout its life cycle is crucial [9]. One newly proposed scheme does not require users' ratings to be stored in the cloud; it does not rely on any trusted third party for threshold decryption, instead allowing the users themselves to encrypt a prediction query and decrypt its results. The results show that the user can obtain predictions quickly despite encrypted rating queries [12]. A new cryptographic framework can protect data both in cloud storage and in cloud-based applications; implementing such a framework enables users to enjoy the tremendous benefits of cloud services while having their data protected [13]. Any TPA in possession of the public key can act as a verifier. We assume that the TPA is unbiased while the server is untrusted. For application purposes, the clients may interact with the cloud servers via the CSP to access or retrieve their pre-stored data. More importantly, in practical scenarios, the client may frequently perform block-level operations on the data files.

III. SECURE CLOUD FRAMEWORK

The security of the cloud environment can be improved by providing data owners with both accounting and auditing information. Let us discuss the features of this framework.

A. Key factors of the cloud framework

The construction of the framework involves several components: the data, the users, the logger, the log harmonizer, and the third-party auditor.

Data: In the cloud environment, data can be an image, a text file, a document, and so on.

Users: The users are the consumers of cloud services.

Logger: The logger is the component that is strongly coupled with the user's data, so that it is downloaded when the data are accessed and copied whenever the data are copied.

Third-party auditor: The auditing function is handled by the TPA, which is an external auditor. It provides the auditing result as a message to the data owners.

B. Proposed work

The basic flow of the operation is shown in figure 1. The secure cloud framework allows for both metering and auditing of the resources. The accounting service measures the performance and consumption of resources, which helps optimize cloud service delivery [14]. In addition to the accounting service, this framework also includes an auditing mechanism, whose intention is to increase the level of security.

Fig 1 Secured cloud framework architecture

A third-party auditor is responsible for performing the auditing. CloudAudit provides cloud service providers with a way to make their performance and security data readily available for potential customers. The specification provides a standard way to present and share detailed, automated statistics about performance and security [15].

IV. ACCOUNTING MECHANISM

The first phase is to implement the accounting mechanism, which is described as follows. Initially, each owner of the resources encrypts their data using ciphertext-policy based encryption [16]. This encryption does not need any third party to produce the encrypted data, i.e., there is no need for an outside party to generate the key and encrypt. In this mechanism an encryptor expresses the access policy, which decides what kind of receivers can decrypt the original information. These access policies are stated as the service level agreements in the cloud environment [1]. Whenever an SLA is formed, the level of risk incurred is based on how well the offered service terms meet the organizational security demands. At runtime, whenever a cloud or service violates its SLA with respect to security controls or cancels any security offerings, the risk of noncompliance with organizational security policies increases [17].

The encrypted data are now stored in the logger component along with their service level agreements. The access policies, together with the JAR file, are sent to the CSP. The authenticity of the CSP is established through a SAML-based certificate authority [26]. This trusted identity provider issues certificates verifying the user's identity based on his username. Only after authentication succeeds are the users or service provider allowed to access the data enclosed in the JAR file.

As for the logging, each time the data are accessed, the JAR will automatically generate a log record, encrypt it using the public key distributed by the data owner, and store it along with the data. The encryption of the log file prevents unauthorized changes to the file by attackers. In addition, some error correction information is sent to the log harmonizer to handle possible log file corruption. To ensure the trustworthiness of the logs, each record is signed by the entity accessing the content. The encrypted log files can later be decrypted and their integrity verified.
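To make the owner-side encryption step concrete, the following is a minimal sketch of ciphertext-policy encryption. It assumes the third-party charm-crypto library and its BSW07 CP-ABE scheme; the attribute names and policy string are hypothetical, and the exact call signatures should be checked against the installed version.

```python
# Sketch only: assumes the charm-crypto library and its BSW07 CP-ABE scheme.
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')
cpabe = CPabe_BSW07(group)
(pk, mk) = cpabe.setup()                 # public parameters and master key

# The encryptor states the access policy; it plays the role of the SLA terms.
policy = '(AUDITOR or OWNER)'            # hypothetical attribute names

# Keys are issued for a receiver's attribute set; only attribute sets that
# satisfy the policy can decrypt.
sk = cpabe.keygen(pk, mk, ['AUDITOR'])

msg = group.random(GT)                   # BSW07 encrypts a group element;
ct = cpabe.encrypt(pk, msg, policy)      # bulk data would use hybrid encryption
assert cpabe.decrypt(pk, sk, ct) == msg
```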

A. Log record generation

Log records are generated by the logger component. Logging occurs at any access to the data in the JAR, and new log entries are appended sequentially, in order of creation: LR = ⟨r_1, . . . , r_k⟩. Each record r_i is encrypted individually and appended to the log file. In particular, a log record takes the following form:

r_i = ⟨ID, Act, T, Loc, h((ID, Act, T, Loc) | r_{i−1} | . . . | r_1), sig⟩

Here ID identifies the entity that accessed the data, Act is the action performed, T is the time of access, and Loc is the location. The component h((ID, Act, T, Loc) | r_{i−1} | . . . | r_1) corresponds to the checksum of the records preceding the newly inserted one, concatenated with the main content of the record itself. The checksum is computed using a collision-free hash function [27]. The component sig denotes the signature of the record created by the server. This type of log record provides the accounting feature to the data owner and allows owners to check whether their data are handled according to the service level agreements.
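As an illustration of this record format, here is a minimal self-contained Python sketch. The field names and the HMAC stand-in for the accessor's signature are assumptions made for brevity; the paper's scheme additionally encrypts each record under the owner's public key and uses a real signature.

```python
import hashlib
import hmac
import json
import time

def append_log_record(log, identity, act, loc, accessor_key: bytes):
    """Append r_i = (ID, Act, T, Loc, h((ID,Act,T,Loc)|r_{i-1}|...|r_1), sig)."""
    content = {"ID": identity, "Act": act, "T": time.time(), "Loc": loc}
    # Chained checksum: hash of this record's content concatenated with all
    # earlier records, newest first, so tampering with any record breaks it.
    h = hashlib.sha256(json.dumps(content, sort_keys=True).encode())
    for r in reversed(log):                         # r_{i-1} | ... | r_1
        h.update(json.dumps(r, sort_keys=True).encode())
    checksum = h.hexdigest()
    # Stand-in for the accessor's signature over the record (HMAC for brevity).
    sig = hmac.new(accessor_key, checksum.encode(), hashlib.sha256).hexdigest()
    log.append({**content, "chain": checksum, "sig": sig})
    return log

log = []
append_log_record(log, "user42", "download", "site-A", b"accessor-secret")
append_log_record(log, "user42", "delete", "site-A", b"accessor-secret")
```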

V. AUDITING MECHANISM

The next phase of our work is to implement an efficient auditing mechanism that assures the integrity of the data stored in the cloud. To audit the cloud environment we need a third-party auditor. The external auditor verifies that the cloud provider is constantly following the policies, procedures, and processes defined by the customer to meet their business requirements.

A. Auditor's role

The audit functions can be handed over to a third party whom both the customer and the provider trust. Before conducting audits on a cloud, the customer must identify the expectations of his or her internal audit department, and what the external auditor is expected to do to meet those internal expectations.

To implement a third-party audit for cloud computing operations, the owner of the data must set up a private key and a public key. The private key allows total access to the data stored in the cloud. The public key allows access to certain blocks of data that the independent verifier will use to test the security, integrity, and vulnerability of the stored data. The verifiers can detect modifications, corrupt files, and deletions [10].

B. Protocol proposed

An effective public auditability protocol is proposed to ensure data integrity. In addition, dynamic data operation is supported, allowing clients to perform block-level operations on the data files while maintaining the same level of data correctness assurance [3].

C. Auditing protocol

To implement the public auditing mechanism, the data are segmented into blocks so that their integrity can be verified. The Merkle hash tree (MHT) is used to organize these blocks. Its main functionality is to allow efficient and secure verification of the contents of a large data structure. It is constructed as a binary tree where the leaves are the hashes of authentic data values [3]. The MHT is commonly used to authenticate the values of data blocks.
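A minimal sketch of the MHT construction over file blocks, assuming SHA-256 as the hash and the common duplicate-last-node convention for an odd number of nodes:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Build the MHT bottom-up; the leaves are hashes of the data blocks."""
    level = [H(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last node
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"m1", b"m2", b"m3", b"m4"])   # root metadata R
```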

The receiver knows the public key pub, the message M, and the signature sig = (sig′ || auth_0 || auth_1 || . . . || auth_{n−1}). First, the receiver verifies the one-time signature sig′ of the message M. If sig′ is a valid signature of M, the receiver computes A_0 = H(Y_i) by hashing the public key of the one-time signature. For j = 1, . . . , n, the nodes A_j of the path are computed as A_j = H(a_{j−1} || b_{j−1}), where a_{j−1} and b_{j−1} are A_{j−1} and the authentication node auth_{j−1}, ordered according to whether auth_{j−1} is a left or right sibling. If A_n equals the public key pub of the Merkle signature scheme, the signature is valid [18].
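The path computation described above can be sketched as follows. The (sibling, side) encoding of the authentication path is an assumption of this illustration:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_auth_path(leaf_hash: bytes, auth_path, pub_root: bytes) -> bool:
    """Recompute A_0 .. A_n from the leaf and auth nodes; accept iff A_n == pub."""
    node = leaf_hash                               # A_0, hashed by the caller
    for sibling, sibling_is_left in auth_path:     # auth_0 .. auth_{n-1}
        node = H(sibling + node) if sibling_is_left else H(node + sibling)
    return node == pub_root
```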

Fig 2 Merkle hash tree

D. Key terms

The execution of the algorithm can be explained using the following key terms.

Setup: The client's public and private keys are generated by invoking KeyGen(·). By running SigGen(·), the data file F is preprocessed, and the homomorphic authenticators together with metadata are produced.

KeyGen(1^k): The client generates a random signing key pair (spk, ssk), chooses a random α ← Z_p, and computes v ← g^α. The secret key is sk = (α, ssk) and the public key is pk = (v, spk).

SigGen(sk, F): Given F = (m_1, m_2, . . . , m_n), the client chooses a random element u ← G and lets t = name||n||u||SSig_ssk(name||n||u) be the file tag for F. Then the client computes a signature σ_i for each block m_i (i = 1, 2, . . . , n) as σ_i ← (H(m_i) · u^{m_i})^α. Denote the set of signatures by Φ = {σ_i}, 1 ≤ i ≤ n. The client then generates a root R based on the construction of the Merkle hash tree (MHT), where the leaf nodes of the tree are an ordered set of hashes of the "file tags" H(m_i) (i = 1, 2, . . . , n). Next, the client signs the root R under the private key α: sig_sk(H(R)) ← (H(R))^α. The client sends {F, t, Φ, sig_sk(H(R))} to the server and deletes {F, Φ, sig_sk(H(R))} from its local storage.
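A runnable sketch of KeyGen and the per-block signature, using the py_ecc pairing library on the BN128 curve. The toy hash-to-G1 mapping and the fixed "random" values are assumptions made for brevity and are not secure choices; the point is only to show σ_i = (H(m_i) · u^{m_i})^α and the pairing check behind it.

```python
import hashlib
from py_ecc.bn128 import G1, G2, add, multiply, pairing, curve_order

# KeyGen(1^k): alpha <- Z_p, v <- g^alpha (the signing pair spk/ssk is omitted).
alpha = 123456789        # toy secret; a real client samples uniformly from Z_p
v = multiply(G2, alpha)

def hash_to_g1(data: bytes):
    # Toy stand-in for H: {0,1}* -> G1 (NOT a secure hash-to-curve construction).
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % curve_order
    return multiply(G1, digest)

u = multiply(G1, 97)     # public "random" element u <- G

def sig_gen(m_i: int, block_bytes: bytes):
    # sigma_i = (H(m_i) * u^{m_i})^alpha
    return multiply(add(hash_to_g1(block_bytes), multiply(u, m_i)), alpha)

# Sanity check on one block: e(sigma_i, g) == e(H(m_i) * u^{m_i}, v).
m_i, block = 42, b"block-42"
sigma_i = sig_gen(m_i, block)
assert pairing(G2, sigma_i) == pairing(v, add(hash_to_g1(block), multiply(u, m_i)))
```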

Default integrity verification: The client or TPA can verify the integrity of the outsourced data by challenging the server. Before challenging, the TPA first uses spk to verify the signature on t. If the verification fails, the TPA rejects by emitting FALSE; otherwise, it recovers u. To generate the message "chal", the TPA (verifier) picks a random c-element subset I = {s_1, s_2, . . . , s_c} of the set [1, n], where we assume s_1 ≤ · · · ≤ s_c. For each i ∈ I the TPA chooses a random element ν_i ← B ⊆ Z_p. The message "chal" specifies the positions of the blocks to be checked in this challenge phase. The verifier sends chal = {(i, ν_i)}_{s_1 ≤ i ≤ s_c} to the prover (server).

GenProof(F, Φ, chal): Upon receiving the challenge chal = {(i, ν_i)}_{s_1 ≤ i ≤ s_c}, the server computes

μ = Σ_{i=s_1}^{s_c} ν_i m_i ∈ Z_p and σ = ∏_{i=s_1}^{s_c} σ_i^{ν_i} ∈ G,

where the challenged data blocks and their corresponding signature blocks are each aggregated into a single value. In addition, the prover provides the verifier with a small amount of auxiliary information {Ω_i}_{s_1 ≤ i ≤ s_c}, which are the node siblings on the path from the leaves {h(H(m_i))}_{s_1 ≤ i ≤ s_c} to the root R of the MHT. The prover responds to the verifier with the proof

P = {μ, σ, {H(m_i), Ω_i}_{s_1 ≤ i ≤ s_c}, sig_sk(H(R))}

VerifyProof(pk, chal, P): Upon receiving the response from the prover, the verifier generates the root R using {H(m_i), Ω_i}_{s_1 ≤ i ≤ s_c} and authenticates it by checking e(sig_sk(H(R)), g) =? e(H(R), g^α). If the authentication fails, the verifier rejects by emitting FALSE. Otherwise the verifier checks

e(σ, g) =? e(∏_{i=s_1}^{s_c} H(m_i)^{ν_i} · u^μ, v)

If the check holds, output TRUE; otherwise FALSE. The protocol is illustrated as follows.

Table 1 Protocol proposed
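To see why the second check accepts honestly generated proofs, here is a short bilinearity derivation (a standard argument for BLS-type homomorphic authenticators, written in LaTeX notation):

```latex
\begin{aligned}
e(\sigma, g) &= e\Big(\prod_{i\in I} \sigma_i^{\nu_i},\, g\Big)
            = e\Big(\prod_{i\in I} \big(H(m_i)\,u^{m_i}\big)^{\alpha\nu_i},\, g\Big)\\
            &= e\Big(\prod_{i\in I} H(m_i)^{\nu_i}\cdot u^{\sum_{i\in I}\nu_i m_i},\, g\Big)^{\!\alpha}
            = e\Big(\prod_{i\in I} H(m_i)^{\nu_i}\cdot u^{\mu},\, g^{\alpha}\Big)\\
            &= e\Big(\prod_{i\in I} H(m_i)^{\nu_i}\cdot u^{\mu},\, v\Big).
\end{aligned}
```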

Ensuring integrity of data when modified: We now show how the scheme can explicitly and efficiently handle fully dynamic data operations, including data modification (M), data insertion (I), and data deletion (D), for cloud data storage. Note that in the following descriptions we assume that the file F and the signature Φ have already been generated and properly stored at the server. The root metadata R has been signed by the client and stored at the cloud server, so that anyone who has the client's public key can challenge the correctness of data storage.

Data deletion: Like insertion, deletion is a complex operation. Deleting a single block means removing the specified block and moving all subsequent blocks one position forward. Suppose the server receives an update request for deleting block m_i; it will delete m_i from its storage space, delete the leaf node h(H(m_i)) in the MHT, and generate the new root metadata R′, as sketched below.
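A sketch of the server-side deletion step, assuming the merkle_root helper from the earlier MHT sketch; recomputing R′ from the remaining blocks stands in for the incremental leaf removal of a real MHT implementation:

```python
def delete_block(blocks: list[bytes], i: int):
    """Delete m_i: the remaining blocks shift one position forward, and the
    root metadata is regenerated so the client can verify the update."""
    remaining = blocks[:i] + blocks[i + 1:]
    # merkle_root is the helper from the MHT sketch in Section V, C.
    return remaining, merkle_root(remaining)   # (new block list, new root R')

blocks = [b"m1", b"m2", b"m3", b"m4"]
blocks, new_root = delete_block(blocks, 2)     # delete the third block
```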

Batch auditing for multi-client data: Since cloud servers concurrently handle multiple clients, the verification process can likewise be handled as multiple simultaneous sessions. Consider K signatures on K distinct data files from K clients; it is far more efficient to aggregate all these signatures into a single one and verify it at one time. The key idea is to use the bilinear aggregate signature scheme [28]. As in the BLS-based construction, the aggregate signature scheme allows the creation of signatures on arbitrary distinct messages. Moreover, it supports the aggregation of multiple signatures by distinct signers on distinct messages into a single short signature, and thus greatly reduces the communication cost while providing efficient verification of the authenticity of all messages.
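The core aggregation idea of [28], in LaTeX notation: with per-client keys v_k = g^{x_k} and signatures σ_k = H(m_k)^{x_k}, the K signatures collapse into one group element that is checked with K + 1 pairings:

```latex
\sigma_{\mathrm{agg}} \;=\; \prod_{k=1}^{K} \sigma_k,
\qquad
e\big(\sigma_{\mathrm{agg}},\, g\big) \;\overset{?}{=}\; \prod_{k=1}^{K} e\big(H(m_k),\, v_k\big).
```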

VI. RESULTS

The implementation of the secure cloud framework can be explained as follows.

Initially, the secured cloud framework, which includes the auditing and accounting mechanisms, is shown.

Fig 3 Secure cloud accounting and auditing framework

The next step is to include the user tasks, such as uploading, downloading, searching for a file, deleting a file, and authentication, for the data stored in the cloud storage server.

Fig 4 Tasks performed by users

Each time authentication is needed, it can be performed by providing the key values. The authentication is performed as a block tag process.

Fig 5 Merkle block tag authentication

For dynamic updating of the data, insertion and deletion of blocks are allowed.

Fig 6 Dynamic updation

When the update is finished, the server indicates the result of the auditing mechanism, including all the details of the update.

Fig 7 Server process

Now the third party is allowed to perform the auditing, and the result is provided.

Fig 8 TPA auditing

The figure below shows the accounting framework model.

Fig 9 Accounting framework

Finally, the information on resource usage is obtained by the data owner through the accounting mechanism.

Fig 10 Generation of the log record

VII. CONCLUSION

The assurance of cloud security is achieved by imposing both auditing and accounting features, the two major features considered here for solving security-related issues in the cloud. We achieve monitoring of the cloud environment by proposing an efficient accounting mechanism that generates a log record as the result of the accounting function. In addition, it increases the level of security by allowing data owners to specify their service level agreements, the contract between service providers and users that keeps their data secure. To further improve security, the proposed work also allows verification of the integrity of the cloud storage, achieved by allowing a third-party auditor to audit the data. It also supports batch auditing and dynamic modification of the data in the cloud.


