Components Of A Distributed Database System


02 Nov 2017


CHAPTER 1

Today’s business environment has an increasing need for distributed databases and client/server applications, as the demand for scalable, consistent, and accessible information is progressively growing. A distributed database system improves communication and data processing by distributing data across different network sites. Not only is data access faster, but a single point of failure is less likely to occur, and users retain local control of their data. However, there is some complexity in managing and controlling distributed database systems.

1.2 Distributed Database Systems

A distributed database is a collection of databases that store data at different sites of a computer network. Each database may involve a different database management system and a different architecture that distributes the execution of transactions. The objective of a distributed database management system (DDBMS) is to control the management of a distributed database (DDB) in such a way that it appears to the user as a centralized database.

Providing the appearance of a centralized database system is one of the main objectives of a distributed database system. This appearance is accomplished through the following transparencies: Location Transparency, Performance Transparency, Copy Transparency, Transaction Transparency, Fragment Transparency, Schema Change Transparency, and Local DBMS Transparency. These transparencies are required to incorporate the desired functions of a distributed database system.

Other goals of a successful distributed database include free object naming. Free object naming allows different users to access the same object under different names, or different objects under the same name, giving users complete freedom in naming objects while sharing data without naming conflicts in the distributed database.
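The idea of free object naming can be sketched as follows. This is an illustrative Python model with hypothetical names, not an actual DDBMS component: each user has a private name space mapped onto global internal object IDs.

```python
# Illustrative sketch of free object naming: a directory maps
# (user, local name) pairs onto internal object IDs, so users may
# name shared objects independently without conflicts.
class NameDirectory:
    def __init__(self):
        self._names = {}  # (user, local_name) -> internal object ID

    def bind(self, user, local_name, internal_id):
        self._names[(user, local_name)] = internal_id

    def resolve(self, user, local_name):
        return self._names[(user, local_name)]

directory = NameDirectory()
# Two users access the same object (ID 101) under different names...
directory.bind("alice", "payroll", 101)
directory.bind("bob", "salaries", 101)
# ...and the same local name can denote different objects per user.
directory.bind("alice", "staff", 7)
directory.bind("bob", "staff", 8)
```

Because resolution always goes through the per-user mapping, no global agreement on names is needed.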

Concurrency control is another issue in database systems. "Concurrency control is the activity of coordinating concurrent accesses to a database in a multi-user database management system (DBMS)." A number of methods provide concurrency control, such as two-phase locking, timestamping, multiversion timestamping, and optimistic non-locking mechanisms. Some methods provide better concurrency control than others, depending on the system.
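Of the methods just listed, two-phase locking is the easiest to illustrate. The following is a simplified Python sketch (hypothetical class and transaction names, single-process only): each transaction acquires locks in a growing phase and, once it releases any lock, may not acquire another.

```python
# Simplified two-phase locking (2PL): a transaction may acquire locks
# only while in its growing phase; after the first release, it enters
# the shrinking phase and further lock requests are illegal.
class TwoPhaseTransaction:
    def __init__(self, lock_table, name):
        self.lock_table = lock_table   # shared dict: item -> holder name
        self.name = name
        self.held = set()
        self.shrinking = False

    def lock(self, item):
        if self.shrinking:
            raise RuntimeError("2PL violated: lock requested after unlock")
        holder = self.lock_table.get(item)
        if holder is not None and holder != self.name:
            return False               # conflict: another transaction holds it
        self.lock_table[item] = self.name
        self.held.add(item)
        return True

    def unlock_all(self):
        self.shrinking = True          # shrinking phase begins
        for item in self.held:
            del self.lock_table[item]
        self.held.clear()

locks = {}
t1 = TwoPhaseTransaction(locks, "T1")
t2 = TwoPhaseTransaction(locks, "T2")
got_t1 = t1.lock("x")        # T1 acquires x
got_t2 = t2.lock("x")        # T2 conflicts and must wait
t1.unlock_all()
got_t2_retry = t2.lock("x")  # after T1 releases, T2 succeeds
```

A real DDBMS would add blocking queues and deadlock detection; this sketch shows only the two-phase discipline itself.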

1.3 Components of a Distributed Database System

This section explains the components of a distributed database system. One of the main components in a DDBMS is the database manager, software responsible for processing a segment of the distributed database. Another main component is the user request interface, usually a client program that acts as an interface to the distributed transaction manager. A distributed transaction manager is a program that translates requests from the user into actionable requests for the database managers, which are typically distributed. A distributed database system is made up of both the distributed transaction manager and the database managers.

1.4 Classification of Distributed Database Management System (DDBMS)

A distributed database is a collection of multiple, logically interrelated databases distributed over a computer network. A distributed database management system is the software that manages the distributed database and makes the distribution transparent to users. A distributed database system consists of loosely coupled sites that share no physical components, and it allows applications to access data from both local and remote databases. In a homogeneous distributed database system, every database is an Oracle Database; in a heterogeneous distributed database system, at least one database is not an Oracle Database. Distributed databases can use a client/server architecture to process information requests.

Homogeneous Distributed Database Systems

Heterogeneous Distributed Database Systems

Client/Server Database Architecture

1.4.1. Homogeneous Distributed Database Systems

A homogeneous distributed database system is a network of two or more Oracle Databases that reside on one or more machines. Figure 1.4.1 illustrates a distributed system that connects three databases. An application can simultaneously access or modify data in several databases within a single distributed environment.

For client applications, the location and platform of the database are transparent to the user. You can also create synonyms for remote objects in the distributed system so that users can access them with the same syntax as local objects.

SELECT * FROM <TABLE1>;

In this way, the distributed system gives the user the appearance of native data access. Users on the client side do not need to know that the data they access resides on remote databases.
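The location transparency provided by synonyms can be modeled as a small catalog lookup. This is an illustrative Python sketch with assumed site and table names, not Oracle's actual resolution mechanism: a local catalog maps each synonym to the site that really stores the object, so the user's query never mentions a location.

```python
# Sketch of synonym-based location transparency: the catalog maps a
# local synonym to (site, remote object name); unknown names are
# treated as local objects.
catalog = {"table1": ("site_b", "employees")}   # assumed mapping

def resolve(table_name):
    """Return (site, actual object name) for a queried name."""
    return catalog.get(table_name, ("local", table_name))

site, name = resolve("table1")   # remote object, transparently located
local = resolve("dept")          # no synonym: resolved locally
```

The user writes the same query either way; only the resolver knows which site is contacted.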


Figure 1.4.1 Homogeneous Distributed Database

An Oracle Database distributed database system can incorporate Oracle Databases of different versions. All supported releases of Oracle Database can participate in a distributed database system. Nevertheless, applications that work with the distributed database must understand the functionality available at each node in the system: a distributed database application cannot expect an Oracle7 database to understand SQL extensions that became available only in later Oracle Database releases.

1.4.2. Heterogeneous Distributed Database Systems

In a heterogeneous distributed database system, at least one of the databases is a non-Oracle Database system. To the application, the heterogeneous distributed database system appears as a single, local Oracle Database; the local Oracle Database server hides the distribution and heterogeneity of the data.

The Oracle Database server accesses the non-Oracle Database system using Oracle Heterogeneous Services in conjunction with an agent. If you access the non-Oracle Database data store using an Oracle Transparent Gateway, the agent is a system-specific application.

Alternatively, generic connectivity can be used to access non-Oracle Database data stores, so long as the non-Oracle Database system supports the ODBC or OLE DB protocols.

1.4.3. Client/Server Database Architecture

A database server is the Oracle software that manages a database, and a client is an application that requests information from a server. Each computer in the network is a node that can host one or more databases. Each node in the distributed database system can act as a client, a server, or both, depending on the situation.


Figure 1.4.3 Oracle Database Distributed Database System

A client can connect to a database server directly or indirectly. A direct connection occurs when a client connects to a server and accesses information from a database contained on that server.

1.4.4 a) Advantages of DDBMS are as follows:

1. Data are located near the site of greatest demand. The data in a distributed database system are dispersed to match business requirements, which reduces the cost of data access.

2. Faster data access. End users often work with only a locally stored subset of the company’s data.

3. Faster data processing. A distributed database system spreads out the system’s workload by processing data at several sites.

4. Growth facilitation. New sites can also be added to the network without affecting the operations of other sites.

5. Improved communications. Because local sites are smaller and located closer to customers, local sites foster better communication among departments and between customers and company staff.

6. Reduced operating costs. It is more cost-effective to add workstations to a network than to update a mainframe system. Development work is done more cheaply and more quickly on low-cost PCs than on mainframes.

7. User-friendly interface. PCs and workstations are usually equipped with an easy-to-use graphical user interface (GUI). The GUI simplifies training and use for end users.

8. Less danger of a single-point failure. When one of the computers fails, the workload is picked up by other workstations. Data are also distributed at multiple sites.

9. Processor independence. The end user is able to access any available copy of the data, and an end user's request is processed by any processor at the data location.

1.4.4 b) Disadvantages of DDBMS:

1. Complexity of management and control. Applications must recognize data location, and they must be able to stitch together data from various sites. Database administrators must have the ability to coordinate database activities to prevent database degradation due to data anomalies.

2. Technological difficulty. Data integrity, transaction management, concurrency control, security, backup, recovery, query optimization, access path selection, and so on, must all be addressed and resolved.

3. Security. The probability of security lapses increases when data are located at multiple sites. The responsibility of data management will be shared by different people at several sites.

4. Lack of standards. There are no standard communication protocols at the database level. (Although TCP/IP is the de facto standard at the network level, there is no standard at the application level.) For example, different database vendors employ different—and often incompatible—techniques to manage the distribution of data and processing in a DDBMS environment.

5. Increased storage and infrastructure requirements. Multiple copies of data are required at different sites, thus requiring additional disk storage space.

6. Increased training cost. Training costs are generally higher in a distributed model than they would be in a centralized model, sometimes even to the extent of offsetting operational and hardware savings.

7. Costs. Distributed databases require duplicated infrastructure to operate (physical location, environment, personnel, software, licensing, etc.)

1.4.4 c) Characteristics of Distributed Database Management Systems:

A DDBMS governs the storage and processing of logically related data over interconnected computer systems in which both data and processing functions are distributed among several sites. A DBMS must have at least the following functions to be classified as distributed:

• Application interface to interact with the end user, application programs, and other DBMSs within the distributed database.

• Validation to analyze data requests for syntax correctness.

• Transformation to decompose complex requests into atomic data request components.

• Query optimization to find the best access strategy. (Which database fragments must be accessed by the query, and how must data updates, if any, be synchronized?)

• Mapping to determine the data location of local and remote fragments.

• I/O interface to read or write data from or to permanent local storage.

• Formatting to prepare the data for presentation to the end user or to an application program.

• Security to provide data privacy at both local and remote databases.

• Backup and recovery to ensure the availability and recoverability of the database in case of a failure.

• DB administration features for the database administrator.

• Concurrency control to manage simultaneous data access and to ensure data consistency across database fragments in the DDBMS.

• Transaction management to ensure that the data moves from one consistent state to another. This activity includes the synchronization of local and remote transactions as well as transactions across multiple distributed segments.
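The request-handling functions in the list above (validation, transformation, and mapping) form a pipeline. The following is a loose Python sketch with hypothetical fragment and site names, not any particular DDBMS implementation: a request is validated, decomposed into atomic per-fragment requests, and each is mapped to the site holding that fragment.

```python
# Sketch of the DDBMS request path: validation -> transformation ->
# mapping. Fragment-to-site assignments are assumed for illustration.
FRAGMENT_SITES = {"emp_europe": "paris", "emp_asia": "tokyo"}

def validate(request):
    """Validation: check the request is well-formed."""
    return isinstance(request, dict) and "fragments" in request

def transform(request):
    """Transformation: decompose one complex request into atomic ones."""
    return [{"fragment": f} for f in request["fragments"]]

def map_to_sites(atomic_requests):
    """Mapping: locate the site that stores each fragment."""
    return [(r["fragment"], FRAGMENT_SITES[r["fragment"]])
            for r in atomic_requests]

request = {"fragments": ["emp_europe", "emp_asia"]}
plan = map_to_sites(transform(request)) if validate(request) else []
```

A real system would follow this with query optimization, I/O, and formatting, per the remaining items in the list.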

Distribution (rather than centralization) is appropriate when:

• Data is used at one location only.

• Data accuracy, confidentiality, and security are a local responsibility.

• Files are simple and used by only a few applications. In this case, there is no benefit to maintaining complex centralized software, and the cost of updates is too high for a centralized storage system.

• Data is used locally for decision support. Queries against the database result in inverted lists or secondary-key accesses, and such queries would degrade the performance of a centralized system. Fourth-generation languages used locally may require different data structures than the centralized systems.


1.4.5 Security Issues in Distributed Database

Database security comprises the systems, processes, and procedures that protect a database from unintended activity. Unintended activity can be categorized as authenticated misuse, malicious attacks, or inadvertent mistakes made by authorized individuals or processes. Database security is also a specialty within the broader discipline of computer security. Traditionally, databases have been protected from external connections by firewalls or routers on the network perimeter, with the database environment existing on the internal network as opposed to being located within a demilitarized zone. Additional network security devices that detect and alert on malicious database protocol traffic include network intrusion detection systems along with host-based intrusion detection systems.


Databases provide many layers and types of information security, typically specified in the data dictionary, including:

Access control: Access control is a system that enables an authority to control access to areas and resources in a given physical facility or computer-based information system. Within the field of physical security, an access control system is generally seen as the second layer in the security of a physical facility.

Auditing: Auditing is the monitoring and recording of selected database activity so that actions can later be traced to the users or processes responsible for them.

Authentication: Authentication is the act of establishing or confirming something (or someone) as authentic, that is, that claims made by or about the subject are true.

Encryption: In cryptography, encryption is the process of transforming information (referred to as plaintext) using an algorithm (called a cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key.

Integrity: Integrity is the assurance that data is accurate and protected from unauthorized or accidental modification.
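The plaintext/cipher/key relationship can be shown with a deliberately simple example. This is a toy XOR cipher for illustration only; a real database would use a vetted algorithm such as AES, never this.

```python
# Toy illustration of encryption (NOT secure; for concept only):
# XOR with a repeating key turns plaintext into unreadable bytes,
# and applying the same key again recovers the plaintext.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"salary=50000"
key = b"secret"
ciphertext = xor_cipher(plaintext, key)
recovered = xor_cipher(ciphertext, key)  # decryption is the same operation
```

The point is only the structure: without the key, the ciphertext is not meaningfully readable; with it, the transformation is reversible.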

1.4.6 Elements of Distributed Database Management System Security

1.4.6.1 General Database Security Concerns

The distributed database has all of the security concerns of a single-site database plus several additional problem areas. We begin our investigation with a review of the security elements common to all database systems and those issues specific to distributed systems. A secure database must satisfy the following requirements (subject to the specific priorities of the intended application):

1. It must have physical integrity (protection from data loss caused by power failures or natural disaster),

2. It must have logical integrity (protection of the logical structure of the database),

3. It must be available when needed,

4. The system must have an audit system,

5. It must have elemental integrity (accurate data),

6. Access must be controlled to some degree depending on the sensitivity of the data,

7. A system must be in place to authenticate the users of the system, and

8. Sensitive data must be protected from inference.

The following discussion focuses on requirements 5-8 above, since these security areas are directly affected by the choice of DBMS model. The key goal of these requirements is to ensure that data stored in the DBMS is protected from unauthorized observation or inference, unauthorized modification, and inaccurate updates. This can be accomplished by using access controls, concurrency controls, updates via the two-phase commit procedure (which avoids integrity problems resulting from physical failure of the database during a transaction), and inference reduction strategies (discussed in the next section).
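The two-phase commit procedure mentioned above can be sketched minimally. This is an illustrative Python model with assumed site names, omitting logging, timeouts, and recovery: the coordinator first asks every site to vote (prepare), and only if all vote yes does it send commit; otherwise it sends abort, so no site commits alone.

```python
# Minimal two-phase commit sketch: phase 1 collects prepare votes
# from every participating site; phase 2 decides commit only on a
# unanimous yes, otherwise abort.
def two_phase_commit(sites):
    """sites: dict mapping site name -> prepare() callable returning bool."""
    votes = {name: prepare() for name, prepare in sites.items()}  # phase 1
    decision = "commit" if all(votes.values()) else "abort"       # phase 2
    return decision, votes

all_ready = {"site_a": lambda: True, "site_b": lambda: True}
one_failed = {"site_a": lambda: True, "site_b": lambda: False}
```

A production protocol also force-writes a log record before each phase so a crashed coordinator can recover its decision; that machinery is omitted here.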

The level of access restriction depends on the sensitivity of the data and the degree to which the developer adheres to the principle of least privilege (access limited to only those items required to carry out assigned tasks).

Typically, a lattice is maintained in the DBMS that stores the access privileges of individual users. When a user logs on, the interface obtains the specific privileges for the user.
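A simple version of such a privilege store can be sketched as follows. The clearance levels and users here are hypothetical; this shows only the lookup-at-logon idea, not a full lattice model.

```python
# Sketch of a privilege lookup: ordered clearance levels, a table of
# per-user clearances obtained at logon, and a per-object read check.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}   # assumed ordering
USER_CLEARANCE = {"alice": "secret", "bob": "public"}    # assumed users

def can_read(user, object_level):
    """A user may read an object at or below their clearance level."""
    return LEVELS[USER_CLEARANCE[user]] >= LEVELS[object_level]
```

When a user logs on, the interface would fetch their entry from `USER_CLEARANCE` once and apply `can_read` to each requested object.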

According to Pfleeger, access permission may be predicated on the satisfaction of one or more of the following criteria:

(1) Availability of data: Unavailability of data is commonly caused by the locking of a particular data element by another subject, which forces the requesting subject to wait in a queue.

(2) Acceptability of access: Only authorized users may view and/or modify the data. In a single-level system, this is relatively easy to implement: if the user is unauthorized, the operating system does not allow system access. In a multilevel system, access control is considerably more difficult to implement, because the DBMS must enforce the discretionary access privileges of the user.

(3) Assurance of authenticity: This includes restricting access to normal working hours to help ensure that the registered user is genuine. It also includes usage analysis to determine whether the current use is consistent with the needs of the registered user, thereby reducing the probability of a fishing expedition or an inference attack.

Concurrency controls help to ensure the integrity of the data. These controls regulate the manner in which the data is used when more than one user is using the same data element. These are particularly important in the effective management of a distributed system, because, in many cases, no single DBMS controls data access. If effective concurrency controls are not integrated into the distributed system, several problems can arise. Bell and Grisom identify three possible sources of concurrency problems:

(1) Lost update: A successful update was inadvertently erased by another user.

(2) Unsynchronized transactions that violate integrity constraints.

(3) Unrepeatable read: Data retrieved is inaccurate because it was obtained during an update.

Each of these problems can be reduced or eliminated by implementing a suitable locking scheme (only one subject has access to a given entity for the duration of the lock) or a timestamp method (the subject with the earlier timestamp receives priority).
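The timestamp rule just mentioned can be illustrated directly. This is a hypothetical Python sketch (made-up transaction names and timestamps): when two transactions conflict on the same item, the one with the earlier timestamp proceeds and the later one restarts, which prevents a later writer from silently erasing an earlier update.

```python
# Sketch of timestamp-based conflict resolution: each transaction is a
# (name, timestamp) pair; on conflict, the earlier timestamp wins and
# the later transaction must restart with a fresh timestamp.
def resolve_conflict(t1, t2):
    winner = t1 if t1[1] < t2[1] else t2
    loser = t2 if winner is t1 else t1
    return winner[0], loser[0]   # (proceeds, restarts)

proceeds, restarts = resolve_conflict(("T1", 100), ("T2", 105))
```

Because the ordering is fixed by timestamps rather than arrival order, the lost-update anomaly described in (1) cannot occur.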

Special problems exist for a DBMS that has multilevel access. In a multilevel access system, users are restricted from having complete data access. Policies restricting user access to certain data elements may result from secrecy requirements, or they may result from adherence to the principle of least privilege (a user only has access to relevant information). Access policies for multilevel systems are typically referred to as either open or closed. In an open system, all data is considered unclassified unless access to a particular data element is expressly forbidden. A closed system is just the opposite: access to all data is prohibited unless the user has specific access privileges.
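The open/closed distinction reduces to a default-allow versus default-deny check, sketched here in Python with hypothetical users and objects.

```python
# Open vs. closed access policy: an open policy permits access unless
# the (user, item) pair is expressly forbidden; a closed policy denies
# access unless the pair was expressly granted.
FORBIDDEN = {("bob", "salaries")}    # open policy: explicit denials only
GRANTED = {("bob", "inventory")}     # closed policy: explicit grants only

def open_policy(user, item):
    return (user, item) not in FORBIDDEN

def closed_policy(user, item):
    return (user, item) in GRANTED
```

Note the asymmetry: under the open policy Bob can read anything not listed, while under the closed policy he can read only what is listed.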

Classification of data elements is not a simple task. This is due, in part, to conflicting goals. The first goal is to provide the database user with access to all non-sensitive data. The second goal is to protect sensitive data from unauthorized observation or inference. For example, the salaries for all of a given firm's employees may be considered non-sensitive as long as the employee's names are not associated with the salaries. Legitimate use can be made of this data. Summary statistics could be developed such as mean executive salary and mean salary by gender. Yet an inference could be made from this data. For example, it would be fairly easy to identify the salaries of the top executives.
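The salary example can be made concrete. One common countermeasure for statistical databases (not named in the text above, and shown here with made-up figures) is a minimum query-set size: summary statistics over large groups are permitted, while a query whose result set is too small is refused, since it would let a user infer individual values.

```python
# Sketch of a minimum query-set size restriction: aggregate queries
# are answered only when enough records are involved, reducing the
# risk of inferring any individual's salary.
SALARIES = {"ceo": 900_000, "cfo": 700_000, "clerk1": 40_000,
            "clerk2": 42_000, "clerk3": 41_000}   # illustrative data
MIN_QUERY_SIZE = 3                                # assumed threshold

def mean_salary(titles):
    if len(titles) < MIN_QUERY_SIZE:
        raise PermissionError("result set too small: inference risk")
    return sum(SALARIES[t] for t in titles) / len(titles)

company_mean = mean_salary(list(SALARIES))   # allowed: five employees
```

This restriction alone is not sufficient (overlapping queries can still be combined into a "tracker" attack), but it illustrates the tension between releasing useful statistics and protecting individuals.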

Another problem is data security classification. There is no clear-cut way to classify data. Millen and Lunt demonstrate the complexity of the problem: They state that when classifying a data element, there are three dimensions:

1. The data may be classified.

2. The existence of the data may be classified.

3. The reason for classifying the data may be classified.

The first dimension is the easiest to handle. Access to a classified data item is simply denied. The other two dimensions require more thought and more creative strategies. For example, if an unauthorized user requests a data item whose existence is classified, how does the system respond? A poorly planned response would allow the user to make inferences about the data that would potentially compromise it. Protection from inference is one of the unsolved problems in secure multilevel database design. Pfleeger lists several inference protection strategies. These include data suppression, logging every move users make (in order to detect behavior that suggests an inference attack), and perturbation of data.
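Of the strategies Pfleeger lists, perturbation is easy to sketch. The following Python example uses made-up values and a hypothetical noise scale: random noise is added to each released value, so aggregates remain roughly accurate while any individual value is too unreliable to support inference.

```python
# Sketch of data perturbation for inference protection: each released
# value is shifted by bounded random noise (seeded for repeatability),
# keeping statistics approximately correct while obscuring individuals.
import random

def perturb(values, scale=1000, seed=0):
    rng = random.Random(seed)
    return [v + rng.uniform(-scale, scale) for v in values]

true_values = [40_000, 42_000, 41_000]
released = perturb(true_values)
```

Choosing the noise scale is the design trade-off: too small and individuals are exposed, too large and the statistics become useless.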

1.4.6.2 Security Problems Unique to Distributed Database Management Systems

a) Centralized or Decentralized Authorization

In developing a distributed database, one of the first questions to answer is where to grant system access. Bell and Grisom outline two strategies:

(1) Users are granted system access at their home site.

(2) Users are granted system access at the remote site.

The first case is easier to handle: it is no more difficult to implement than a centralized access strategy. Bell and Grisom point out that the success of this strategy depends on reliable communication between the different sites (the remote site must receive all of the necessary clearance information). However, since many different sites can grant access, the probability of unauthorized access increases, and once one site has been compromised, the entire system is compromised. If, instead, each site maintains access control for all users, the impact of the compromise of a single site is reduced (provided that the intrusion is not the result of a stolen password).

The second strategy, while perhaps more secure, has several disadvantages. Probably the most glaring is the additional processing overhead required, particularly if the given operation requires the participation of several sites. Furthermore, the maintenance of replicated clearance tables is computationally expensive and more prone to error. Finally, the replication of passwords, even though they're encrypted, increases the risk of theft.

A third possibility offered by Woo and Lam centralizes the granting of access privileges at nodes called policy servers. These servers are arranged in a network. When a policy server receives a request for access, all members of the network determine whether to authorize the access of the user. Woo and Lam believe that separating the approval system from the application interface reduces the probability of compromise.
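Woo and Lam's scheme can be sketched loosely as a unanimous vote among policy servers. This is an illustrative Python model with invented rules, not their actual protocol: every member of the server network must approve a request before access is granted.

```python
# Loose sketch of policy-server authorization: each server applies its
# own policy rule to the request, and access is granted only if all
# servers in the network agree.
def authorize(request, policy_servers):
    votes = [server(request) for server in policy_servers]
    return all(votes)

# Two hypothetical policy rules for illustration.
servers = [lambda r: r["user"] == "alice",
           lambda r: r["resource"] != "audit_log"]

granted = authorize({"user": "alice", "resource": "payroll"}, servers)
denied = authorize({"user": "alice", "resource": "audit_log"}, servers)
```

Separating these servers from the application interface, as Woo and Lam argue, means compromising the interface alone is not enough to grant privileges.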

b) Integrity

According to Bell and Grisom, preservation of integrity is much more difficult in a heterogeneous distributed database than in a homogeneous one. The degree of central control dictates the level of difficulty with integrity constraints (integrity constraints enforce the rules of the individual organization). The homogeneous distributed database has strong central control and has identical DBMS schema. If the nodes in the distributed network are heterogeneous (the DBMS schema and the associated organizations are dissimilar), several problems can arise that will threaten the integrity of the distributed data. They list three problem areas:

1. Inconsistencies between local integrity constraints,

2. Difficulties in specifying global integrity constraints,

3. Inconsistencies between local and global constraints.

Bell and Grisom explain that local integrity constraints are bound to differ in a heterogeneous distributed database. The differences stem from differences in the individual organizations. These inconsistencies can cause problems, particularly with complex queries that rely on more than one database. Development of global integrity constraints can eliminate conflicts between individual databases, yet these are not always easy to implement, because global integrity constraints are separated from the individual organizations. It may not always be practical to change the organizational structure in order to make the distributed database consistent, and ultimately this leads to inconsistencies between local and global constraints. Conflict resolution depends on the level of central control: if global control is strong, the global integrity constraints take precedence; if central control is weak, the local integrity constraints do.

1.5 Partial Security Concept

In a distributed database, partial security can be used to improve system performance. Partially secure systems allow potential security violations, such as covert channel use, in certain situations. We describe the basic idea of a requirement specification that allows the system designer to specify important properties of the database at a suitable level. In many distributed applications, security is another important constraint, since the system maintains sensitive information to be shared by multiple users with different levels of security clearance. As more and more such systems come into use, the need to integrate them cannot be avoided.

It is important to define the exact meaning of partial security, because security violations of sensitive data must be strictly controlled. A security violation here indicates a potential covert channel, i.e., a transaction may be affected by a transaction at a higher security level. One approach is to define security in terms of a percentage of allowed security violations. However, the value of this definition is questionable: even though a system may allow a very low percentage of security violations, this fact alone reveals nothing about the security of individual data. For example, a system might have a 99% security level, but the 1% of insecurity might allow the most sensitive piece of data to leak out. A more precise metric is necessary for applications where security is a serious concern. Thus, in partial security, violations are allowed only at certain security levels.
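The weakness of a single system-wide percentage can be demonstrated numerically. The following Python sketch (hypothetical event log) computes violation rates per security level instead: a system that is 99% "secure" overall may still leak every secret-level item.

```python
# Sketch of a per-level security metric: instead of one global
# violation percentage, compute the violation rate at each security
# level from a log of (level, violated?) events.
def violation_rates(events):
    counts = {}                       # level -> (seen, violated)
    for level, violated in events:
        seen, bad = counts.get(level, (0, 0))
        counts[level] = (seen + 1, bad + (1 if violated else 0))
    return {lvl: bad / seen for lvl, (seen, bad) in counts.items()}

# 100 events: one secret-level violation among 99 clean public accesses.
events = [("secret", True)] + [("public", False)] * 99
rates = violation_rates(events)
overall = 1 / 100                     # the misleading "1%" global figure
```

Here the global rate is 1%, yet the secret level is 100% violated, which is exactly the objection raised above.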


Figure 1.5 Depicts Full and Partial Security

The Corporate Recruitment Management system is helpful for job providers, i.e., companies in need of employees, and for job seekers, both experienced candidates and freshers, in need of jobs. The portal's main aim is to provide job seekers with available IT vacancies without charging them. CRS will automatically send mail to all job seekers whose skills match a requirement.

1.5.1 Features:

This project can easily support decision making in new recruitment.

Effective way of providing communication between job providers and job seekers.

Reliable and consistent way of searching jobs.

Conducting secured and restricted online exam for screened employees.

Sending Email notification to all job seekers.

Corporate Recruitment System (CRS) is a part of the Human Resource Management System that structures and manages the entire recruitment process. This corporate recruitment service system will primarily focus on the posting and management of job vacancies. However, this will be the initial step towards achieving the long-term goal of delivering broader services to support recruitment.

This will provide a service to potential job applicants to search for working opportunities and, if they choose, to make an application online. It is planned that ultimately all vacancies will be posted online and that this site will offer employers the facilities both to post their vacancies online and to review and manage the resulting applications efficiently through the web with the help of the CRS. CRS will allow job providers to establish one-to-one relationships with candidates by keeping in close communication with them throughout the application, interview, and hiring process. It even allows candidates to track the progress of their applications.

1.5.2 Advantages of Corporate Recruitment System (CRS):

Corporate Recruitment System (CRS) has all the features and functions required for executing a successful recruitment task, providing exceptional ease of use.

The following is an overview of the features and benefits of CRS.

Database software is installed and pre-configured for immediate, effective, and efficient use of the system.

A pre-configured, ready-to-run jobs database with a management module for adding and deleting entries efficiently.

A database to store candidates’ details securely.

Customizable authentication to control access to database files using assigned user logins and password control.

Provides information to managers so that they can make judgments about particular situations.

Reductions in the cost of hiring: a 50-60 percent decrease in the cost of hiring.

Reduces the time required to complete the recruitment process of any organization.

CHAPTER 2

LITERATURE SURVEY – A Review

2.1 Background

As distributed networks become more popular, the need for improvement in distributed database management systems becomes even more important. A distributed system varies from a centralized system in one key respect: The data and often the control of the data are spread out over two or more geographically separate sites. Distributed database management systems are subject to many security threats additional to those present in a centralized database management system (DBMS). For the past several years the most prevalent database model has been relational. While the relational model has been particularly useful, its utility is reduced if the data does not fit into a relational table. Many organizations have data requirements that are more complex than can be handled with these data types. Multimedia data, graphics, and photographs are examples of these complex data types.

Many scholars have carried out research in the field of distributed database systems (DDBMS) and their security. Some work on proposing new protocols to impose security restrictions; other veterans research partial security constraints to protect at least some part of a DDBMS.

Security has become an emerging concern in most of the tools that have in some way evolved from distributed databases. These include data warehouses and data mining systems, collaborative computing systems, distributed object systems, and the web. As more and more such distributed database tools appear, the impact of secure distributed database systems on these tools will become a significant requirement.

There are a number of issues regarding security. An increasing number of real-time applications, such as railway signaling control systems and medical electronics systems, require a high quality of security to assure confidentiality and integrity of information. Therefore, it is desirable and essential to fulfill security requirements in security-critical real-time systems.

To meet the needs of a wide variety of security requirements imposed by real-time systems, a group-based security service model is used in which the security services are partitioned into several groups depending on security types. While services within the same security group provide the identical type of security service, the services in the group can achieve different quality of security. Security services from a number of groups can be combined to deliver better quality of security.
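The grouping idea above can be sketched in code. This is an illustrative sketch only: the group names, service names, and quality scores below are our own assumptions, not taken from any specific system. Services of the same type sit in one group at different quality levels, and one service per group is selected to form a combined security package.

```python
# Hypothetical security-service groups: each group provides one type of
# security service; services within a group differ only in quality level.
GROUPS = {
    "confidentiality": {"DES": 0.3, "3DES": 0.6, "AES": 0.9},
    "integrity":       {"CRC32": 0.2, "MD5": 0.5, "SHA-256": 0.9},
    "authentication":  {"password": 0.4, "token": 0.7, "biometric": 0.9},
}

def select_services(min_quality):
    """For each group, pick the lowest-quality service that still meets
    the required minimum quality (a cheapest-adequate policy)."""
    chosen = {}
    for group, services in GROUPS.items():
        adequate = {s: q for s, q in services.items() if q >= min_quality}
        if not adequate:
            raise ValueError(f"no service in {group!r} meets quality {min_quality}")
        chosen[group] = min(adequate, key=adequate.get)
    return chosen
```

Combining one service from each group, as the selection above does, is how a requested overall quality of security could be delivered under this model.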

The aim of a distributed database management system (DDBMS) is to process and communicate data in an efficient and cost-effective manner. It has been recognized that such distributed systems are vital for the efficient processing required in military as well as commercial applications. For many of these applications, it is especially important that the DDBMS should operate in a secure manner. For example, the DDBMS should allow users who are cleared at different levels access to the database consisting of data at a variety of sensitivity levels without compromising security.

Distributed database systems (DDBS) pose different problems such as:

1) Accessing distributed and replicated databases.

2) Access control and transaction management in DDBS.

3) Mechanism to monitor data retrieval and update to databases.

Sang H. Son et al. (1997) [1] note that conflicts in database systems with both real-time and security requirements can sometimes be irresolvable. They attack this problem by allowing a database to have partial security in order to improve real-time performance when necessary. By their definition, systems that are partially secure allow security violations between only certain levels. They present the ideas behind a specification language that allows database designers to specify important properties of their database at an appropriate level. In order to help the designers, they developed a tool that scans a database specification and finds all irresolvable conflicts. Once the conflicts are located, the tool takes the database designer through an interactive process to generate rules for the database to follow during execution when these conflicts arise. They briefly describe the BeeHive distributed database system, and discuss how their approach can fit into the BeeHive architecture. Their proposed solution to this problem of conflicting requirements involves dynamically keeping track of both the real-time and the security aspects of the system performance. When the system is performing well and making a high percentage of its deadlines, conflicts that arise between security and real-time requirements will tend to be resolved in favor of the security requirements more often, and more priority inversions will occur. However, the opposite is true when the real-time performance of the system starts to degrade. Then, the scheduler will attempt to eliminate priority inversions, even if it means allowing an occasional covert channel. Semantic information about the system is necessary when making these decisions. This information could be specified before the database became operational using a specification language. In this language, users would be able to express the relative importance of keeping information secure and meeting deadlines.
Specifications in this language could then be "compiled" by a pre-processing tool. After a successful compilation, the system should be deterministic in the sense that an action must be clear for every possible conflict that could arise. This action might depend on the current level of real-time performance or other aspects of the system. Any ambiguities would be caught at compile time, causing the compilation to be unsuccessful. The compilation of the specification produces output that can be understood and used by the database system. The problem of accomplishing the union of security and real-time requirements becomes more complicated in a distributed environment. In a distributed environment, having a single entity keep track of system performance in terms of timeliness and security for the entire global database could be impractical for a number of reasons. Requiring transactions to report to this performance monitor after every execution could put more load on the network and have a negative impact on performance. The node that contained the performance monitor would be a "hotspot" and might introduce a performance bottleneck. These problems would become more serious as the system grew, so such a solution would have limited scalability. This brings up an interesting question: Is it better to have many performance monitors, each responsible for a small part of the database, or to have fewer of them, each with a larger domain? In other words, what granularity of the system should the performance monitors be responsible for? In their approach, there is a performance monitor responsible for every node. One of the issues to be addressed in a system with multiple performance monitors is how to optimize the database globally with only local knowledge. In their approach, this is accomplished through communication between performance monitors at each node.

Moses Garuba et al. [2] note that multilevel secure database management system (MLS/DBMS) products no longer enjoy direct commercial-off-the-shelf (COTS) support. Meanwhile, existing users of these MLS/DBMS products continue to rely on them to satisfy their multilevel security requirements. This calls for a new approach to developing MLS/DBMS systems, one that relies on adapting the features of existing COTS database products rather than depending on traditional custom-designed products to provide continuing MLS support. They took fragmentation as a good basis for implementing multilevel security in the new approach because it is well supported in some current COTS database management systems. They implemented a prototype that utilizes the inherent advantages of the distribution scheme in distributed databases for controlling access to single-level fragments; this is achieved by augmenting the distribution module of the host distributed DBMS with MLS code such that the clearance of the user making a request is always compared to the classification of the node containing the fragments referenced; requests to unauthorized nodes are simply dropped. The prototype they implemented was used to instrument a series of experiments to determine the relative performance of the tuple, attribute, and element level fragmentation schemes. Their experiments measured the impact on the front-end and the network when various properties of each scheme, such as the number of tuples, attributes, security levels, and the page size, were varied for a Selection and Join query. They were particularly interested in the relationship between performance degradation and changes in the quantity of these properties. The performance of each scheme was measured in terms of its response time.
The response times for the element level fragmentation scheme increased as the numbers of tuples, attributes, security levels, and the page size were increased. The response times for the attribute level fragmentation scheme were the fastest, suggesting that the performance of the attribute level scheme is superior to the tuple and element level fragmentation schemes. In the context of assurance, their research has also shown that the distribution of fragments based on security level is a more natural approach to implementing security in MLS/DBMS systems, because a multilevel database is analogous to a distributed database based on security level. Overall, their study finds that the attribute level fragmentation scheme demonstrates better performance than the tuple and element level schemes. The response times (and hence the performance) of the element level fragmentation scheme exhibited the worst performance degradation compared to the tuple and attribute level schemes.
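The access check at the heart of the prototype described above can be sketched briefly. This is a hedged illustration of the routing rule only (the level names and node layout are our assumptions): each node holds single-level fragments at one classification, and the distribution module forwards a request only to nodes whose classification the requesting user's clearance dominates, silently dropping the rest.

```python
# Illustrative clearance lattice: higher number dominates lower.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top-secret": 3}

def route_request(user_clearance, target_nodes):
    """target_nodes is a list of (node_name, node_classification) pairs.
    Forward the request only to nodes the user's clearance dominates;
    requests to unauthorized nodes are simply dropped."""
    allowed = []
    for node, classification in target_nodes:
        if LEVELS[user_clearance] >= LEVELS[classification]:
            allowed.append(node)
        # else: request to this node is dropped without notification
    return allowed
```

Dropping rather than rejecting unauthorized requests, as above, also avoids leaking the existence of higher-classified fragments to lower-cleared users.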

Steven P. Coy et al. [3] note that security concerns must be addressed when developing a distributed database. When choosing between the object-oriented model and the relational model, many factors should be considered. The most important of these factors are single-level and multilevel access controls, protection against inference, and maintenance of integrity. When determining which distributed database model will be more secure for a particular application, the decision should not be made purely on the basis of available security features. One should also question the efficacy and efficiency of the delivery of these features. Do the features provided by the database model provide adequate security for the intended application? Does the implementation of the security controls add an unacceptable amount of computational overhead? In this paper, the security strengths and weaknesses of both database models and the special problems found in the distributed environment are discussed. They have seen that the choice of database model significantly affects the implementation of database system security. Each model has strengths and weaknesses. It is clear that more research has been completed for securing centralized databases. Sound security procedures exist for the centralized versions of both models. Both have procedures available that protect the secrecy, integrity, and availability of the database. For example, multilevel relational DBMS use views created at the system level to protect the data from unauthorized access. OODBMS, on the other hand, protect multilevel data at the object level through subject authorization and limitation of access to the object’s methods. The principal unsolved problem in centralized databases is inference. The current strategies do not prevent all forms of inference, and those suggested by Thuraisingham and Ford are computationally cumbersome.
Given that both models have well-developed security procedures, the choice of DBMS model in a centralized system could be made independent of the security issue. The same cannot be said of distributed databases. The relational model currently has a clear edge in maintaining security in the distributed environment. The main reason for the disparity between the two models is the relative immaturity of the distributed object-oriented database. The relational model, however, is not without problems: the processing of global views in a heterogeneous environment takes too long, and the enforcement of database integrity in a heterogeneous environment is problematic because of the conflicts between local and global integrity constraints. The lack of completely compatible, vendor-independent standards for the distributed OODBMS relegates this model to a promised, yet not completely delivered, technology. If the distributed environment is homogeneous, the implementation of subject authorization should be possible. For the heterogeneous distributed OODBMS, however, the absence of universally accepted standards will continue to hamper security efforts.

Ghazi Alkhatib et al. [4] note that distributed database systems (DDBS) pose different problems when accessing distributed and replicated databases. In particular, access control and transaction management in DDBS require different mechanisms to monitor data retrieval and updates to databases. Current trends in multi-tier client/server networks make DDBS an appropriate solution to provide access to and control over localized databases. Oracle, as a leading Database Management System (DBMS) vendor, employs the two-phase commit technique to maintain a consistent state for the database. The objective of their work is to explain transaction management in DDBS and how Oracle implements this technique. They give an example to demonstrate the steps involved in executing the two-phase commit. By using this feature of Oracle, organizations will benefit from the use of DDBS to successfully manage the enterprise data resource. The Two-Phase Commit Protocol (2PC) has two types of nodes to complete its processes: the coordinator and the subordinate (Mohan et al., 1986). The coordinator’s process is attached to the user application, and communication links are established between the subordinates and the coordinator. The Two-Phase Commit protocol goes through, as its name suggests, two phases. The first phase is a PREPARE phase, whereby the coordinator of the transaction sends a PREPARE message. The second phase is the decision-making phase, where the coordinator issues a COMMIT message, if all the nodes can carry out the transaction, or an ABORT message, if at least one subordinate node cannot carry out the required transaction. (Capitalization is used to distinguish between technical and literal meanings of some terminologies.) The 2PC may be carried out with one of the following methods: Centralized 2PC, Linear 2PC, and Distributed 2PC.
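The two phases described above can be sketched in a single-process simulation. This is a minimal illustration of the protocol's decision logic only, not Oracle's implementation: message passing is replaced by direct method calls, each subordinate's vote is simulated by a flag, and a real coordinator would also write log records for crash recovery.

```python
class Subordinate:
    """Simulated subordinate node; can_commit stands in for its PREPARE vote."""
    def __init__(self, can_commit):
        self.can_commit = can_commit
        self.state = "ACTIVE"

    def prepare(self):
        self.state = "PREPARED" if self.can_commit else "ABORTING"
        return self.can_commit

    def commit(self):
        self.state = "COMMITTED"

    def abort(self):
        self.state = "ABORTED"

def two_phase_commit(coordinator_log, subordinates):
    # Phase 1 (PREPARE): the coordinator collects a YES/NO vote from every node.
    votes = [sub.prepare() for sub in subordinates]
    # Phase 2 (decision): COMMIT only if all voted YES, otherwise ABORT.
    decision = "COMMIT" if all(votes) else "ABORT"
    coordinator_log.append(decision)       # the decision is logged before delivery
    for sub in subordinates:
        if decision == "COMMIT":
            sub.commit()
        else:
            sub.abort()
    return decision
```

Note how a single NO vote in phase one forces every subordinate to abort in phase two, which is what keeps the distributed database in a consistent state.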

Bhavani Thuraisingham et al. [5] assume that the security level of the schema of a relation is the security level of the user who creates the schema. For every security level L that dominates the security level of the schema of a relation R, there could be tuples of R at level L. Therefore, corresponding to an unclassified schema of relation R, there could be tuples of R at the unclassified, confidential, secret, and top secret levels. Since a multilevel relation can be decomposed into single-level relations, corresponding to every security level L that dominates the security level of the schema of a relation R, there could be fragments of R at level L. A user can read any tuple whose security level is dominated by his level. When a user enters a tuple, the tuple is assigned the level of the user and is stored in a fragment of the relation at the security level of the user. If the user’s security level is dominated by the security level of the schema of the relation, then the tuple is not entered.
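The read and write rules just described can be sketched directly. This is a hedged illustration under the assumption of a simple totally ordered lattice (unclassified through top secret); the function names and fragment representation are ours, not from the paper.

```python
# Totally ordered security levels: later entries dominate earlier ones.
LEVELS = ["unclassified", "confidential", "secret", "top-secret"]

def dominates(a, b):
    """True if level a dominates (is at or above) level b."""
    return LEVELS.index(a) >= LEVELS.index(b)

def readable(user_level, tuples):
    """A user can read any tuple whose level his own level dominates.
    tuples is a list of (value, tuple_level) pairs."""
    return [t for t, lvl in tuples if dominates(user_level, lvl)]

def insert_tuple(user_level, schema_level, t, fragments):
    """A new tuple takes the writer's level and is stored in the fragment
    at that level, but only if the writer's level dominates the schema's.
    fragments maps level -> list of tuples stored at that level."""
    if not dominates(user_level, schema_level):
        return False                          # tuple is not entered
    fragments.setdefault(user_level, []).append(t)
    return True
```

Storing each tuple in a fragment at its own level, as above, is what makes the multilevel relation decomposable into the single-level relations mentioned in the text.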

The survey above shows how critical the security concern is in the field of distributed database systems (DDBMS). Some researchers work on partial security; some apply the two-phase commit protocol; some offer valuable ideas for enhancing security at the schema level; some give reasons for imposing security in OODBMS, i.e., protecting multilevel data at the object level through subject authorization and limiting access to objects’ methods; and some, defining partially secure systems as those that allow security violations between only certain levels, present the ideas behind a specification language that allows database designers to specify important properties of their database at an appropriate level.

CHAPTER 3

PROPOSED METHODOLOGY

3.1 Proposed System

Recruitment System (RS) is a part of the Human Resource Management System that structures and manages the entire recruitment process. This recruitment service system will primarily focus on the posting and management of job vacancies. However, this will be the initial step towards achieving the long term goal of delivering broader services to support recruitment.

Figure 3.1 Framework of the proposed system

The discussion below describes the possible security breaches at the different levels of the layered framework shown above.

3.1.1 At Layer-1 Admin/super user’s level:-

This layer is in fully protected mode; here all permissions and validations are imposed at the different levels. Security is not compromised at this level; it is the highest of all the levels.

3.1.2 At Layer-2 Job Recruiter level:-

This layer is in partially protected mode; here all permissions and validations are imposed in such a way that only a few of the information areas can be accessed by lower levels, up to some extent. For example, if a job recruiter wants to share questionnaires with another job recruiter, he can violate security up to some limit.

3.1.3 At Layer -3 Job Seekers level:-

This layer is also in partially protected mode; here all permissions and validations are imposed in such a way that only a few of the information areas can be accessed from the upper levels, up to limit/range 1 or limit/range 2. For example, if a job seeker wants the list of other jobs matching his skills, or wants the results of the online tests, and the job recruiter is not responding, then he may be involved in a security breach, but a permissible one: in this case he can obtain the desired information from the super level.
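The three layers above can be sketched as a small access-control model. This is an illustrative sketch of the proposed scheme only, with the protection modes, breach ranges, and audit log as our own simplifying assumptions: layer 1 is fully protected and unrestricted, while layers 2 and 3 may perform a limited, logged "permissible breach" within their configured range.

```python
# Layer 1: admin/super user, Layer 2: job recruiter, Layer 3: job seeker.
PROTECTION = {1: "full", 2: "partial", 3: "partial"}
BREACH_RANGE = {2: 1, 3: 2}   # how many layers a permissible breach may span

def check_access(requester_layer, target_layer, audit_log):
    """Return True if access is allowed; permissible breaches are logged."""
    if requester_layer == target_layer:
        return True                     # access within one's own layer is free
    if PROTECTION[requester_layer] == "full":
        return True                     # the fully protected layer is unrestricted
    span = abs(requester_layer - target_layer)
    if span <= BREACH_RANGE.get(requester_layer, 0):
        audit_log.append((requester_layer, target_layer))   # permissible breach
        return True
    return False                        # breach outside the allowed range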

The following is an overview of the benefits of RS.

1. Database software installed and pre-configured for the immediate, effective, and efficient use of the system.

2. Pre-configured and ready-to-run jobs database with a management module for adding and deleting entries efficiently.

3. Database to store the candidates’ details securely.

4. Customizable authentication to control access to database files using assigned user logins and password control.

5. Provides information to the managers so that they can make judgments about particular situations.

6. Helps to control access to database files using assigned user logins and password control.

3.2 Software Requirements

Operating System : Windows XP

Database : Sql Server

Server side technology : ASP.Net

Server side scripting : ASP

Client side scripting : HTML

Web-Server : IIS

3.3 Microsoft.NET Framework

The .NET Framework is a computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

3.4 Flow Diagrams

3.4.1 Overall Use Case

3.4.2 Administrator Use Case

3.4.3 Recruiter Use Case

3.4.4 Job Seeker Use Case

3.6 Objectives:-

This system provides service to the potential job applicants to search for working opportunities.

This system helps the HR Personal in the recruitment of new candidates to the company.

Corporate Recruitment System will allow job providers to establish one-to-one relationships with candidates.

This corporate recruitment service system will primarily focus on the posting and management of job vacancies.

This system is designed such that ultimately all vacancies will be posted online and would offer employers the facilities to post their vacancies online.

It helps to review and manage the resulting applications efficiently through the web.

It even allows the candidates to track the progress of their application.

CHAPTER 4


