Evaluation Of Distributed Accountability For Data

CLOUD computing presents a new way to supplement the current consumption and delivery model for IT services based on the Internet, by providing dynamically scalable and often virtualized resources as a service over the Internet. A major feature of cloud services is that users' data are usually processed remotely on unknown machines that users do not own or operate. While enjoying the convenience brought by this new technology, users also start worrying about losing control of their own data. The data processed on clouds are often outsourced, leading to a number of issues related to accountability, including the handling of personally identifiable information. Such fears are becoming a significant barrier to the wide adoption of cloud services. To address this problem, a Cloud Information Accountability (CIA) framework, based on the notion of information accountability, is proposed to keep track of the actual usage of users' data in the cloud. The CIA framework provides end-to-end accountability in a highly distributed fashion. In particular, an object-centered approach is proposed that encloses the logging mechanism together with users' data and policies, leveraging the programmable capability of JAR (Java ARchive) files to automatically log the usage of the users' data by any entity in the cloud. Users send their data, along with any policies they want to enforce (such as access control policies and logging policies), enclosed in JAR files, to cloud service providers. Any access to the data triggers an automated and authenticated logging mechanism local to the JARs. To strengthen users' control, distributed auditing mechanisms are also provided.

Objectives:

To keep users from losing control of their own data, a novel, highly decentralized information accountability framework is proposed to keep track of the actual usage of users' data in the cloud.

To enclose the logging mechanism together with users' data and policies, an object-centered approach is proposed.

To strengthen users' control, distributed auditing mechanisms are provided.

To improve the efficiency and effectiveness of tracking and auditing data usage in the cloud.

Contribution:

This project proposes a novel automatic and enforceable logging mechanism in the cloud. This is the first systematic approach to data accountability through the novel usage of JAR files. The proposed architecture is platform independent and highly decentralized, in that it does not require any dedicated authentication or storage system to be in place.

Problem Statement:

A user who subscribes to a certain cloud service usually needs to send his/her data, as well as any associated access control policies, to the service provider. After the data are received by the cloud service provider, the provider is granted access rights, such as read, write, and copy, on the data. Using conventional access control mechanisms, once the access rights are granted the data are fully available at the service provider, making it difficult to track the actual usage of the data.

EXISTING SYSTEM:

Conventional access control approaches developed for closed domains such as databases and operating systems, or approaches using a centralized server in distributed environments, are not suitable, due to the following features characterizing cloud environments. First, data handling can be outsourced by the direct cloud service provider (CSP) to other entities in the cloud, and these entities can in turn delegate the tasks to others, and so on. Second, entities are allowed to join and leave the cloud in a flexible manner. Among Java-based security techniques, the methods most closely related to this work are self-defending objects (SDOs). Self-defending objects are an extension of the object-oriented programming paradigm, in which software objects that offer sensitive functions or hold sensitive data are responsible for protecting those functions and data. This project similarly extends the concepts of object-oriented programming. The key difference is that the SDO authors still rely on a centralized database to maintain the access records, while the items being protected are held as separate files. In previous work, a Java-based approach was provided to prevent privacy leakage from indexing, which could be integrated with the CIA framework proposed in this work, since the two build on related architectures.

Disadvantage:

The data processed on clouds are often outsourced, leading to a number of issues related to accountability, including the handling of personally identifiable information. Such fears are becoming a significant barrier to the wide adoption of cloud services.

Data handling in the cloud goes through a complex and dynamic hierarchical service chain which does not exist in conventional environments. The user cannot have any control over his/her own data.

PROPOSED SYSTEM:

A novel approach, namely the Cloud Information Accountability (CIA) framework, based on the notion of information accountability, is proposed in this system. Unlike privacy protection technologies, which are built on the hide-it-or-lose-it perspective, information accountability focuses on keeping the data usage transparent and trackable. The proposed CIA framework provides end-to-end accountability in a highly distributed fashion. One of the main innovative features of the CIA framework lies in its ability to maintain lightweight and powerful accountability that combines aspects of access control, usage control, and authentication. By means of the CIA, data owners can track not only whether or not the service-level agreements are being honored, but also enforce access and usage control rules as needed. Associated with the accountability feature, this system also develops two distinct modes for auditing: push mode and pull mode. The push mode refers to logs being periodically sent to the data owner or stakeholder, while the pull mode refers to an alternative approach whereby the user (or another authorized party) can retrieve the logs as needed. The design of the CIA framework presents substantial challenges, including uniquely identifying CSPs, ensuring the reliability of the log, and adapting to a highly decentralized infrastructure. The basic approach toward addressing these issues is to leverage and extend the programmable capability of JAR (Java ARchive) files to automatically log the usage of the users' data by any entity in the cloud. Users send their data, along with any policies they want to enforce (such as access control policies and logging policies), enclosed in JAR files, to the cloud service providers.

Advantage:

Every access to the user's data should be correctly and automatically logged. This requires integrated techniques to authenticate the entity that accesses the data, and to verify and record the actual operations on the data as well as the time at which the data have been accessed.

Log files should be reliable and tamper-proof to avoid illegal insertion, deletion, and modification by malicious parties. Recovery mechanisms are also desirable to restore log files damaged by technical problems. Log files should be sent back to their data owners periodically to inform them of the current usage of their data.

More importantly, log files should be retrievable anytime by their data owners when needed, regardless of the location where the files are stored.

The proposed technique should neither intrusively monitor data recipients' systems nor introduce heavy communication and computation overhead, which would otherwise hinder its feasibility and adoption in practice.

INTRODUCTION:

Cloud computing refers to a computing platform that is able to dynamically provide, configure, and reconfigure servers to address a wide range of needs, ranging from scientific research to e-commerce. While cloud computing is expanding rapidly as a service used by a great many individuals and organizations internationally, policy issues related to cloud computing are not being widely discussed or considered. Details of the services provided are abstracted from the users, who no longer need to be experts in technology infrastructure. Moreover, users may not know the machines which actually process and host their data. While enjoying the convenience brought by this new technology, users also start worrying about losing control of their own data. The data processed on clouds are often outsourced, leading to a number of issues related to accountability, including the handling of personally identifiable information. Such fears are becoming a significant barrier to the wide adoption of cloud services. To allay users' concerns, it is essential to provide an effective mechanism for users to monitor the usage of their data in the cloud. Users need to be able to ensure that their data are handled according to the service-level agreements made at the time they sign on for services in the cloud.

Conventional access control approaches developed for closed domains such as databases and operating systems, or approaches using a centralized server in distributed environments, are not suitable, due to the following features characterizing cloud environments. First, data handling can be outsourced by the direct cloud service provider (CSP) to other entities in the cloud, and these entities can in turn delegate the tasks to others, and so on. Second, entities are allowed to join and leave the cloud in a flexible manner. As a result, data handling in the cloud goes through a complex and dynamic hierarchical service chain which does not exist in conventional environments. To overcome the above problems, a novel approach, namely the Cloud Information Accountability (CIA) framework, is proposed, based on the notion of information accountability. Unlike privacy protection technologies, which are built on the hide-it-or-lose-it perspective, information accountability focuses on keeping the data usage transparent and trackable. The proposed CIA framework provides end-to-end accountability in a highly distributed fashion. One of the main innovative features of the CIA framework lies in its ability to maintain lightweight and powerful accountability that combines aspects of access control, usage control, and authentication. By means of the CIA, data owners can track not only whether or not the service-level agreements are being honored, but also enforce access and usage control rules as needed. Associated with the accountability feature, this system also develops two distinct modes for auditing: push mode and pull mode. The push mode refers to logs being periodically sent to the data owner or stakeholder, while the pull mode refers to an alternative approach whereby the user (or another authorized party) can retrieve the logs as needed.

LITERATURE REVIEW:

Title : Provable Data Possession at Untrusted Stores

Author: Giuseppe Ateniese, Randal Burns, Reza Curtmola, Joseph Herring, Lea Kissner, Zachary Peterson, Dawn Song

Verifying the authenticity of data has emerged as a critical issue in storing data on untrusted servers. It arises in peer-to-peer storage systems, network file systems, long-term archives, web-service object stores, and database systems. Such systems prevent storage servers from misrepresenting or modifying data by providing authenticity checks when accessing data. The paper introduces a model for provable data possession (PDP) that allows a client that has stored data at an untrusted server to verify that the server possesses the original data without retrieving it. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking supports large data sets in widely distributed storage systems. The paper presents two provably secure PDP schemes that are more efficient than previous solutions, even when compared with schemes that achieve weaker guarantees. In particular, the overhead at the server is low, as opposed to linear in the size of the data.
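
For intuition only, the sketch below shows a naive keyed spot check in Java: the client keeps an HMAC tag for a sampled block before outsourcing it and later challenges the server to return that block. This is not the paper's provably secure construction (whose homomorphic tags avoid retrieving the block at all); the class name, key, and block contents are illustrative.

    import javax.crypto.Mac;
    import javax.crypto.spec.SecretKeySpec;
    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;

    public class NaiveSpotCheck {
        // Client side: compute a keyed tag for a sampled block before outsourcing.
        static byte[] tag(byte[] key, int blockIndex, byte[] block) throws Exception {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            mac.update((byte) blockIndex); // bind the tag to the block position
            return mac.doFinal(block);
        }

        public static void main(String[] args) throws Exception {
            byte[] key = "demo-secret-key!".getBytes(StandardCharsets.UTF_8);
            byte[] block = "contents of block 7".getBytes(StandardCharsets.UTF_8);
            byte[] storedTag = tag(key, 7, block); // kept by the client
            // Challenge: the server returns the block and the client re-derives
            // the tag. (Real PDP avoids retrieving the block at all.)
            byte[] fromServer = "contents of block 7".getBytes(StandardCharsets.UTF_8);
            boolean ok = Arrays.equals(storedTag, tag(key, 7, fromServer));
            System.out.println("server still holds block 7: " + ok);
        }
    }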

Advantages:

It presents two provably-secure PDP schemes that are more efficient even when compared with schemes that achieve weaker guarantees.

Provides less overhead.

Disadvantages:

Imposes a significant I/O and computational burden on the server.

Title : An Open Framework for Foundational Proof-Carrying Code

Author: Xinyu Feng, Zhaozhong Ni, Zhong Shao and Yu Guo

Foundational certified systems are packages containing machine code and mechanized proofs about safety properties. Building foundational certified systems is hard because software systems often use many different computation features (stacks and heaps, strong and weak memory update, first- and higher-order function pointers, sequential and concurrent control flows, etc.) and span different abstraction levels (e.g., user-level code and run-time system code such as thread schedulers and garbage collectors). This paper proposes OCAP, an open framework for developing foundational proof-carrying code; it is the first FPCC framework that systematically supports interoperation of different verification systems. OCAP lays a set of Hoare-style inference rules above the raw machine semantics, so that proofs can be constructed following these rules instead of directly using the mechanized meta-logic. The soundness of these rules is proved in the meta-logic framework with machine-checkable proofs, so the rules themselves do not need to be trusted. OCAP is modular, extensible, and expressive, and therefore satisfies all the requirements for an open framework.

Advantages:

OCAP also supports separate verification of program modules in different foreign systems.

The assertion language for OCAP is expressive enough to specify the invariants enforced in foreign verification systems.

Disadvantages:

OCAP cannot be applied to a large set of applications.

Title : Decentralized Trust Management and Accountability in Federated Systems

Author: Brent N. Chun and Andy Bavier

In federated systems, trust management must also be accompanied by accountability. Whether intentional or not, misuse of shared resources is inevitable, particularly in research settings where experimental network services and measurement studies often use the network in unusual ways. While the intent is rarely malicious, anomalous use of the network is often irresponsible and naive. In this paper, the authors present a layered architecture for addressing the end-to-end trust management and accountability problem. In this context, the three subproblems faced are: (i) expressing and verifying trust in a flexible, scalable, and accountable manner; (ii) monitoring trust relationships over time so that misuse of trust can be detected; and (iii) managing and reevaluating trust relationships based on automatic detection of misuse of trust. In wide-area network testbeds, for example, these subproblems can be cast as: (i) how are principals authorized to use resources in the system; (ii) how is the use of these resources monitored so that abusive behavior (e.g., scanning a remote network for valid IP addresses) can be tracked down; and (iii) how is abusive behavior automatically detected and handled before it escalates to the point where formal complaints are made (e.g., from external ISPs). Addressing these three problems will be key to sustaining long-term growth and to avoiding large amounts of traffic filtering by disgruntled ISPs.

Advantages:

Decentralized trust management and accountability facilities should allow expressing and verifying trust in a flexible and scalable manner.

It monitors the use of trust relationships over time, and manages and reevaluates trust relationships based on historical traces of past behavior in a fully accountable manner.

Disadvantages:

To provide full accountability, federated systems require aggregation at multiple levels both on individual nodes and correlated across nodes for distributed anomaly detection. Addressing this problem in an efficient and effective manner is still an open research problem.

Title : Provenance Management in Curated Databases

Author: Peter Buneman, Adriane P. Chapman and James Cheney

Modern science is becoming increasingly dependent on databases. This poses new challenges for database technology, many of them to do with scale and distributed processing. However, there are other issues concerned with the preservation of the "scientific record": how and from where information was obtained. These issues are particularly important as database technology is employed not just to provide access to source data, but also to the derived knowledge of scientists who have interpreted the data. Many scientists believe that provenance, or metadata describing creation, recording, ownership, processing, or version history, is essential for assessing the value of such data. However, provenance management is not well understood; there are few guidelines concerning what information should be retained and how it should be managed, and current database technology provides little assistance for managing provenance. In this paper, a realistic approach to automatic provenance tracking in curated databases is proposed. The authors implemented their approach and conducted an experimental evaluation of several methods of storing and managing provenance. The most naive approach investigated has a relatively high storage cost (storage overhead proportional to the amount of data touched by an update) and a moderate processing cost (overhead of up to 30% of update processing time), and even simple provenance queries are fairly expensive to answer. However, the hierarchical-transactional technique reduced the storage overhead in these experiments by around a factor of 5, while decreasing the processing overhead per update operation to at most 6% and providing improved performance on provenance queries.

Advantages:

Provenance can be tracked and managed efficiently.

It increases the reliability and transparency of the scientific record.

Title : A Posteriori Compliance Control

Author: Sandro Etalle and William H. Winsborough

In this paper the authors present APPLE (A Posteriori PoLicy Enforcement), a framework for policy specification and end-to-end policy enforcement in collaborative environments. In APPLE, the burden of preventing violations of established policies is not assigned to a trusted component but rests with the user, who is not prevented from acting wrongly but is held accountable for her actions. Distributed auditing authorities routinely check whether users have obtained and used their data in accordance with the applicable policies. When audited, a user shows her log to demonstrate that (a) she possessed the policies allowing her to carry out the actions she performed, (b) she fulfilled the obligations required by these policies, and (c) she acquired the policies from trustworthy sources. There are three critical components of a posteriori policy enforcement: logging, auditing, and accountability. Logging records the actions taken by users; a log is the history of a user's actions as recorded by the infrastructure. Auditing is the process whereby those logs are interrogated to determine whether they are consistent with the policies associated with transmitted documents and with observations made by the auditing authority about the apparent relationships between transmitted documents and other documents available to the user. Accountability is the property of a user of being susceptible to penalty should misbehavior be detected.

Advantages:

This framework combines a logic for accountability with trust management to provide a flexible system in which policies include administration rights and one can define groups in a simple and effective way.

Disadvantages:

It does not prevent illegitimate behavior, but rather deters it.

It does not preclude transgressions.

Title : Towards a theory of accountability and audit

Author: Radha Jagadeesan, Alan Jeffrey, Corin Pitcher and James Riely

In this paper, two contributions toward bringing formal foundations to the study of accountability are made. First, the authors describe an operational model of accountability-based systems. Honest and dishonest principals are described as agents in a distributed system where the communication model guarantees point-to-point integrity and authenticity. Auditors and other trusted agents (such as trusted third parties) are also modeled internally as agents. The behaviors of all agents are described as processes in a process algebra with discrete time. Auditor implementability is ensured by forcing auditor behavior to be completely determined by the messages it receives. Second, the authors describe analyses to support the design of accountability systems and the validation of auditors for finitary systems (those with finitely many principals running finite-state processes with finitely many message kinds). Finitary systems are compiled to (turn-based) games, and alternating temporal logic is used to specify the properties of interest, which permits existing model-checking algorithms to be adapted for verification. The results provide the foundations necessary to explore tradeoffs in the design of mechanisms that ensure accountability. The potentially conflicting design parameters include the efficiency of the audit, the amount of logging, and the required use of message signing, watermarking, or trusted third parties. Design choices place constraints on the auditor, the agents of the system, and the underlying communication infrastructure.

Advantages:

This framework allows audit-based accountability systems to explore the tradeoffs between the requirements on the honest principals, the guarantees provided by the communication network, and the precision demanded of the audit protocol.

Disadvantages:

The full integration with cryptographic primitives in the operational model is not addressed.

Efficient audits of large datasets are not achieved.

SOFTWARE REQUIREMENT SPECIFICATION:

1.1 Purpose:

The main concept proposed is to construct a Cloud Information Accountability (CIA) framework that provides efficient logging and auditing control. It also provides an error correction mechanism.

Document Conventions:

This document is written in the following style.

Font Style : Times New Roman

Headings : 12 pt, Bold

Sub-Headings : 12 pt, Bold

Description : 12 pt

Intended Audience and Reading Suggestions:

This document is intended for those involved in the construction of an efficient logger and log harmonizer. The system provides a logging and auditing mechanism for users' data that achieves dynamic scalability. It also provides an error correction mechanism.

Project Scope:

The project initially collects the requirements for storing and maintaining clients' data in the cloud network. It then analyzes and creates an efficient logging and auditing mechanism for accountability. Finally, it provides an efficient way of ensuring security with privacy protection.

2 Overall Descriptions:

2.1 Product Perspective:

First, a key is generated for encrypting the user's data; this includes the formation of log records. Second, the encrypted data are stored in the JAR file. Third, the JAR grants access to the cloud service provider. Finally, auditing takes place.

2.2 Product Features:

The product features are listed below,

Highly secure -> The architecture provides a reliable and highly secure network.

Communication -> The server and registered users can communicate with each other using a communication link.

User friendly -> The architecture is simple and allows users to access the application easily.

2.3 User Classes Characteristics:

The user privileges vary according to their designations. Basic knowledge of using computers is adequate to use this application. Knowledge of how to handle the system is necessary. The user interface will be friendly enough to guide the user.

2.4 Operating Environment:

This system needs the following specifications.

2.4.1 Hardware Specification:

Processor : Pentium IV 500MHz.

Monitor : SVGA

RAM : 2GB or 4GB

Secondary Storage : 520GB

Speed : 3.2GHz

2.4.2 Software Requirements:

Language : Java

Operating System : Win 2000/xp

Back-End : Ms-Access

Front-End : Java Swing

2.5 Design and Implementation Constraints

The system needs a good network connection, i.e., minimal network traffic. The connection between users and the database must be fast and safe. Registered users' information is confidential. The project provides a highly secure architecture.

2.6 User Documentation:

The application will include a user manual for helping and guiding users on how to interact with the system and perform various functions. The core components and their usage will be explained in detail.

2.7 Assumptions and Dependencies:

The server controls all other processes and data transmissions.

3. System Features:

To provide a low-cost, scalable, location-independent platform for managing clients' data, current cloud storage systems adopt several new distributed file systems, for example, the Apache Hadoop Distributed File System (HDFS), the Google File System (GFS), the Amazon S3 file system, CloudStore, etc. These file systems share some similar features: a single metadata server provides centralized management through a global namespace; files are split into blocks or chunks and stored on block servers; and the systems are composed of interconnected clusters of block servers. These features enable cloud service providers to store and process large amounts of data. However, it is crucial to offer efficient verification of the integrity and availability of stored data, in order to detect faults and support automatic recovery. Moreover, this verification is necessary to provide reliability by automatically maintaining multiple copies of data and automatically redeploying processing logic in the event of failures. Although existing schemes can make a true-or-false decision about data possession without downloading the data from untrusted stores, they are not suitable for a distributed cloud storage environment, since they were not originally constructed on an interactive proof system.
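
As a rough illustration of the per-block verification such file systems depend on, the sketch below recomputes a SHA-256 digest of a fetched replica and compares it with a digest assumed to have been recorded at write time; this stored-digest convention is an assumption for the example (HDFS, for instance, actually uses CRC checksums).

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;

    public class ReplicaCheck {
        static byte[] digest(byte[] block) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(block);
        }

        public static void main(String[] args) throws Exception {
            byte[] original = "contents of chunk 0042".getBytes(StandardCharsets.UTF_8);
            byte[] expected = digest(original); // recorded when the block was written
            // Later: a replica is fetched from a block server and re-verified.
            byte[] replica = "contents of chunk 0042".getBytes(StandardCharsets.UTF_8);
            if (Arrays.equals(expected, digest(replica))) {
                System.out.println("replica verified");
            } else {
                System.out.println("corrupt replica: trigger re-replication");
            }
        }
    }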

4. External Interface Requirements

4.1 User Interface:

The user interface will be a standalone application. User-friendly menus and screens are provided, with easy access at the click of a button. The proposed system will provide secured data transmission with privacy protection.

4.2 Hardware Interface:

No special hardware interface is needed apart from a standard personal computer. A computer that has enough hard disk space to host the operating system and enough processor speed to function normally is sufficient for developing the project.

4.3 Software Interfaces:

The application will be developed using Java Swing as the front end and MS Access as the back end. The operating system used will be Windows 2000/XP.

5. Other Nonfunctional Requirements

5.1 Usability aspect:

A client should be able to utilize the integrity check as part of collaboration services.

The scheme should conceal the details of the storage to reduce the burden on clients.

5.2 Security aspect:

The scheme should provide adequate security features to resist some existing attacks, such as data leakage attack and tag forgery attack.

5.3 Performance aspect:

The scheme should have lower communication and computation overheads than a non-cooperative solution.

5.4 Software Quality Attributes:

User-friendliness

The proposed system will be user-friendly, designed to be easy to use through a simple interface. The software can be used by anyone with the necessary computer knowledge and is built around a simple look-and-feel concept.

Reliability

The system is designed not to crash or fail. In case of system failure, recovery can be performed using advanced backup features.

Maintainability

All code shall be fully documented. Each function shall be commented with pre- and post-conditions. All program files shall include comments concerning the date of the last change. The code should be modular, to permit future modifications. For defects, the system maintains a solution database.

Portability

The software can run either on Microsoft windows operating systems or Linux operating systems.

SYSTEM DESIGN:

System Architecture:

Block Diagram:

[Figure: block diagram of the CIA framework. The data owner performs key generation and encloses data in JAR files; the logger produces log records; the certified authority and cloud service provider (CSP) interact with the JARs; the log harmonizer supports distributed auditing in push and pull modes, produces the merged log, and applies error correction to handle log file corruption.]

Module Description:

Module 1: User Registration

A major feature of cloud services is that users' data are usually processed remotely on unknown machines that users do not own or operate. Users' fears of losing control of their own data (particularly financial and health data) can therefore become a significant barrier to the wide adoption of cloud services. To address this problem, a novel, highly decentralized Cloud Information Accountability framework is proposed to keep track of the actual usage of users' data in the cloud. Each user creates a pair of public and private keys based on Identity-Based Encryption (IBE). This IBE scheme is a Weil-pairing-based scheme, which protects against one of the most prevalent attacks on the architecture.

Input : User data

Output: Public and private keys

[Figure: data owner -> registration -> public and private keys]
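
A minimal sketch of the registration step is given below. Since the standard JDK ships no Weil-pairing IBE provider, the sketch substitutes an ordinary RSA key pair purely to mark where registration yields the owner's public/private keys; a faithful implementation would use a pairing-based library such as jPBC.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;

    public class Registration {
        public static void main(String[] args) throws Exception {
            // Stand-in for the IBE setup: the standard JDK has no Weil-pairing
            // IBE provider, so an RSA pair only marks where registration
            // yields the owner's keys.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair ownerKeys = gen.generateKeyPair();
            // The public key travels with the logger JAR; the private key
            // stays with the data owner for decrypting pulled log records.
            System.out.println("generated " + ownerKeys.getPublic().getAlgorithm()
                    + " key pair for the data owner");
        }
    }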

Module 2: Automated logging:

Using the generated key, the user creates a logger component, which is a JAR file, to store the data items. The logger is the component of the CIA that is strongly coupled with the user's data: it is downloaded when the data are accessed and copied whenever the data are copied. It handles a particular instance or copy of the user's data and is responsible for logging access to that instance or copy. Its main tasks include automatically logging access to the data items it contains, encrypting the log records using the public key of the content owner, and periodically sending them to the log harmonizer. The encryption of the log file prevents unauthorized changes to the file by attackers. The data owner can opt to reuse the same key pair for all JARs or create different key pairs for separate JARs; using separate keys enhances security without introducing any overhead except in the initialization phase. The logger may also be configured to ensure that access and usage control policies associated with the data are honored. It requires only minimal support from the server (e.g., a valid Java virtual machine installed) in order to be deployed. The tight coupling between data and logger results in a highly distributed logging system. The JAR file includes a set of simple access control rules specifying whether and how the cloud servers, and possibly other data stakeholders (users, companies), are authorized to access the content itself. The data owner then sends the JAR file to the cloud service provider that he subscribes to.

Input : Keys

Output: Log files

[Figure: data owner -> keys -> log files]
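
The sketch below illustrates the logger's core step under stated assumptions: it forms one access record and seals it under the data owner's public key, so only the owner (or the harmonizer holding the private key) can read it. RSA-OAEP is used here because a single short record fits in one block; the record format and names are illustrative, not the framework's actual wire format.

    import javax.crypto.Cipher;
    import java.nio.charset.StandardCharsets;
    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.time.Instant;
    import java.util.Base64;

    public class LogRecordDemo {
        public static void main(String[] args) throws Exception {
            // Key pair from the registration step (see Module 1).
            KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
            gen.initialize(2048);
            KeyPair ownerKeys = gen.generateKeyPair();

            // One access record: who touched which item, how, and when.
            String record = Instant.now() + " csp.example.com READ patients.csv";

            // Seal the record under the owner's public key; a short record
            // fits in one RSA-OAEP block, longer logs would need a hybrid
            // AES-then-RSA scheme.
            Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
            rsa.init(Cipher.ENCRYPT_MODE, ownerKeys.getPublic());
            byte[] sealed = rsa.doFinal(record.getBytes(StandardCharsets.UTF_8));
            System.out.println(Base64.getEncoder().encodeToString(sealed));
        }
    }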

Module 3: Authenticate CSP

To authenticate the CSP to the JAR, OpenSSL-based certificates are used, wherein a trusted certificate authority certifies the CSP. In the event that the access is requested by a user, SAML-based authentication is employed, wherein a trusted identity provider issues certificates verifying the user's identity based on his username. If the authentication succeeds, the service provider (or the user) is allowed to access the data enclosed in the JAR.

Input : JAR files

Output: Authentication

[Figure: JAR files -> authentication -> cloud service provider]
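
A hedged sketch of the certificate side of this step, using the JDK's X.509 APIs: the JAR checks that the CSP's certificate is currently valid and is signed by the trusted certificate authority. The PEM file names are placeholders, and the SAML-based user authentication path is omitted.

    import java.io.FileInputStream;
    import java.security.cert.CertificateFactory;
    import java.security.cert.X509Certificate;

    public class CspAuthentication {
        public static void main(String[] args) throws Exception {
            CertificateFactory cf = CertificateFactory.getInstance("X.509");
            // The PEM file names are placeholders for a deployment's real files.
            X509Certificate ca = (X509Certificate) cf.generateCertificate(
                    new FileInputStream("trusted-ca.pem"));
            X509Certificate csp = (X509Certificate) cf.generateCertificate(
                    new FileInputStream("csp.pem"));
            csp.checkValidity();            // reject expired or not-yet-valid certs
            csp.verify(ca.getPublicKey());  // reject certs not signed by the CA
            System.out.println("CSP authenticated: " + csp.getSubjectX500Principal());
        }
    }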

Module 4: Distributed auditing

The distributed auditing mechanism includes the algorithms for data owners to query the logs regarding their data. To allow users to be timely and accurately informed about their data usage, the distributed logging mechanism is complemented by an innovative auditing mechanism with two complementary modes: 1) push mode and 2) pull mode. In push mode, the logs are periodically pushed to the data owner (or auditor) by the harmonizer. Pull mode allows auditors to retrieve the logs anytime they want to check the recent access to their own data. This auditing mechanism has two main advantages. First, it guarantees a high level of availability of the logs. Second, the use of the harmonizer minimizes the amount of workload for human users in going through long log files sent by different copies of JAR files.

Input : JAR files

Output: Audit logs

[Figure: JAR files -> auditing -> push mode / pull mode]
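
The toy harmonizer below sketches the two modes under simplifying assumptions: logger JARs hand encrypted records to receive(), pull() lets the owner or an auditor fetch them on demand, and a scheduled task stands in for the periodic push. The real component merges logs from many JAR copies and ships them over the network; all names here are illustrative.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class LogHarmonizer {
        private final List<String> buffered = new ArrayList<>();

        // Logger JARs hand their encrypted records to the harmonizer.
        synchronized void receive(String encryptedRecord) {
            buffered.add(encryptedRecord);
        }

        // Pull mode: the owner or an auditor fetches the buffered logs on demand.
        synchronized List<String> pull() {
            List<String> batch = new ArrayList<>(buffered);
            buffered.clear();
            return batch;
        }

        public static void main(String[] args) {
            LogHarmonizer harmonizer = new LogHarmonizer();
            harmonizer.receive("<encrypted record from a JAR copy>");
            // Push mode: ship whatever is buffered to the owner on a schedule.
            ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
            timer.scheduleAtFixedRate(
                    () -> System.out.println("pushed to owner: " + harmonizer.pull()),
                    0, 24, TimeUnit.HOURS);
        }
    }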

Module 5: Handling log file corruption

Some error correction information will be sent to the log harmonizer to handle possible log file corruption. To ensure trustworthiness of the logs, each record is signed by the entity accessing the content. Further, individual records are hashed together to create a chain structure, able to quickly detect possible errors or missing records. The encrypted log files can later be decrypted and their integrity verified. They can be accessed by the data owner or other authorized stakeholders at any time for auditing purposes with the aid of the log harmonizer.

Input : Log file corruption

Output: Error correction

[Figure: log file corruption -> error correction -> data owner]
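
A minimal sketch of the chain structure mentioned above: each entry stores SHA-256(previous hash || record), so altering, dropping, or reordering any record changes every hash after it, which is how corruption is quickly detected. Record signing and the harmonizer's error correction information are omitted, and the record strings are illustrative.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.Arrays;

    public class LogChain {
        // Each entry stores SHA-256(previousHash || record); altering or
        // dropping any earlier record breaks every hash that follows it.
        static byte[] link(byte[] prevHash, String record) throws Exception {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(prevHash);
            return sha.digest(record.getBytes(StandardCharsets.UTF_8));
        }

        public static void main(String[] args) throws Exception {
            byte[] h0 = new byte[32]; // agreed-upon genesis value
            byte[] h1 = link(h0, "alice READ report.pdf");
            byte[] h2 = link(h1, "csp.example.com COPY report.pdf");
            // Verification replays the records and compares the final hash.
            byte[] replayed = link(link(h0, "alice READ report.pdf"),
                    "csp.example.com COPY report.pdf");
            System.out.println("chain intact: " + Arrays.equals(h2, replayed));
        }
    }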

UML Diagrams

Class Diagram

A class diagram is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, their operations (or methods), and the relationships among the classes. The class diagram is the main building block of object-oriented modeling. It is used both for general conceptual modeling of the application's structure and for detailed modeling, translating the models into programming code.

Activity Diagram

Activity diagrams are graphical representations of workflows of stepwise activities and actions with support for choice, iteration and concurrency. Activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control.

Use Case Diagram:

A use case diagram is a type of behavioral diagram defined by and created from a Use-case analysis. Its purpose is to present a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. Roles of the actors in the system can be depicted.

Sequence Diagram:

A sequence diagram is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a Message Sequence Chart. A sequence diagram shows object interactions arranged in time sequence. It depicts the objects and classes involved in the scenario and the sequence of messages exchanged between the objects needed to carry out the functionality of the scenario. Sequence diagrams typically are associated with use case realizations in the Logical View of the system under development. The Sequence Diagram models the collaboration of objects based on a time sequence. It shows how the objects interact with others in a particular scenario of a use case.

CONCLUSION:

In this proposed system, Cloud Information Accountability (CIA) is proposed to provide end-to-end accountability in a highly distributed fashion. One of the main innovative features of the CIA framework lies in its ability to maintain lightweight and powerful accountability that combines aspects of access control, usage control, and authentication. Associated with the accountability feature, two distinct modes for auditing are also proposed. This innovative approach of merging automatic logging with an auditing mechanism allows the data owner not only to audit his content but also to enforce strong back-end protection if needed. Moreover, one of the main features of this work is that it enables the data owner to audit even those copies of his data that were made without his knowledge.

FUTURE WORK:

In the future, the verification of the integrity of the JRE and the authentication of JARs will be refined. It will also be investigated whether it is possible to leverage the notion of a secure JVM being developed by IBM. In the long term, a comprehensive and more generic object-oriented approach to facilitate autonomous protection of traveling content will be designed. A variety of security policies, such as indexing policies for text files, usage control for executables, and generic accountability and provenance controls, will be supported in the future.


