The SQL Server Password Policy


Introduction

Organisations use database systems to support their daily activities and transactions, so the security of those databases becomes one of the most important issues to be addressed. A database is vulnerable to misuse and damage from both external and internal threats. According to Bertino and Sandhu (2005),

"Security breaches can be typically categorised as unauthorised data observation, incorrect data modification and data unavailability".

The database administrator (DBA) has to put maximum effort into protecting the physical integrity of databases, especially the records, against sabotage. A simple and basic method to accomplish this is taking regular backups. The integrity of each database element requires that the value of each field can be created or modified only by authorised users, and only with correct input values. Access control is carried out according to the restrictions defined by the database administrator. The database management system (DBMS) applies the DBA's security policy, which has to meet the following requirements (Burtescu, 2008):

Server security: involves limiting access to data stored on the server. It is the most important aspect and has to be considered and planned carefully.

Connections to the database: when ODBC is used, each connection must be checked to ensure that it corresponds to a single user who has access to the data.

Access control table: the most common form of securing a database. Appropriate use of the table access control involves close collaboration between the administrator and the database developer.

Restriction tables: include lists of untrusted subjects who might attempt to initiate sessions.

Project Rationale

The project is a comparison of the various database tools and techniques used in different DBMS in the market. For the purpose of this research, three DBMS are considered – Oracle, IBM DB2 and Microsoft SQL Server – as they are the most widely used database management systems in the present market. These techniques are used to secure the central database system and protect the data from unauthorised access. The work covers the security concepts, approaches and the different tools and techniques used to ensure database security. DBAs set up the various user accounts, passwords and privileges.

Aim

The main aim of the research is to explore the different tools and techniques to handle database security and the data integrity across different DBMS (Database Management Systems).

Objectives

To ensure the user security by user id and password management

To assign roles and privileges to users

To create triggers, auditing, tracing and tracking

To manage data integrity constraints

To create profiles and system resource allocation

To investigate and apply vendor-specific security techniques

Research Methodology

The project includes the analytic stage, synthesis stage and the critical appraisal stage. One key technique for analysis is the literature review which is the systematic review of the current knowledge on the dissertation topic. Observation is also used partly for the research as the project involves programming on the techniques for the database security. As a part of resources for the research, secondary data is used. The resources used are various books (print and ebooks), journals, articles, papers published in conferences and other trusted resources on the internet.

Literature Review

The DBMS interfaces with application programs, and the data stored in the database is used by several applications and different users pertaining to these applications. The database system allows these users to access and manipulate the data contained in the database in a suitable and efficient manner. Every organization chooses the database management system according to their need and requirement.

The most important concern for any company is to ensure the security of its databases, which is a complex issue. The security measures tend to be complex depending on the complexity of the databases. Security measures form an integral part of a database from its initial phases, which include inception as well as design. Modern techniques used to monitor the security of databases manage the security and protection fortifications at different levels: host, physical, application, network and data (Jangra et al., 2010).

According to Pernul (1994), database security is concerned with ensuring the secrecy, integrity and availability of data stored in a database. Secrecy must deal with the possibility that information may also be disclosed by legitimate users acting as an 'information channel' by passing secret information to unauthorised users. This may be done intentionally or without the knowledge of the authorised user. Integrity requires data to be protected from malicious or accidental modification, including the insertion of false data, the contamination of data and the destruction of data. Integrity constraints are rules that define the correct states of a database and thus can protect the correctness of the database during operation.

An organisation, while implementing database systems, has to give primary consideration to data security. The security factors, as stated by Jangra et al. (2010), are:

Authentication

Authorisation

Encryption

Views and Triggers

Privilege management

Authentication

Database authentication is the process of creating a user and establishing their identity to a database server. Users have to provide their login credentials to validate that they have the right to access the server. At the authentication stage, the factors that are determined include specific rights to read or update tables, execute procedures and queries, and carry out structural changes to the database. There are different ways to connect to a database, depending on the application, user requests and security requirements.
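As a minimal illustration of this stage, the Oracle-style statements below create a user who authenticates with a password and is allowed to connect; the user name and password are hypothetical.

-- Create a database user who authenticates with a password (Oracle syntax).
CREATE USER alice IDENTIFIED BY "S3cure#Pass12";

-- Allow the new user to open a session (connect) to the database.
GRANT CREATE SESSION TO alice;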

Password Creation

Text-based passwords are the most commonly used method of authentication. Other authentication methods exist, such as public key cryptography, graphical passwords, encrypted images, fingerprints, retinal scans and many others. Text-based passwords are more prone to security breaches because most users tend to select the easiest available words, making them weak and less secure. This emphasises the need for regulations governing password creation (Komanduri et al., 2011).

Password Policy

The password policy forms the basic requirement for password authentication. Shay et al. (2010) define a password policy as a set of rules defined by administrators and companies to fight the weaknesses of both innate and user-created text-based passwords, which users must follow when choosing a password. These policies do not always lead to secure passwords, as they are limited by user behaviour. According to Komanduri et al. (2011), users often respond to rigid policies with impatience, which results in weak passwords.

This research compares the different policies defined by Oracle, IBM and Microsoft, which are discussed below.

Oracle Password Policy

Password length is restricted to 30 characters

A password of between 12 and 30 characters, including numbers, is preferred

Password must contain at least one digit, one upper-case character and one lower-case character

A password combining letters in both cases, numbers and special characters is recommended

Passwords use the database character set, which can include the underscore (_), dollar ($) and number sign (#) characters

Password expiration is generally set to 120 days

The account is automatically locked after 10 failed login attempts

A password expiry warning is given to the user seven days before expiry

Five grace logins are allowed after the password expires

Oracle (2012), as a part of security, provides

"...Routine for password complexity verification through the PL/SQL script UTLPWDMG.SQL that can be executed to check whether the passwords are sufficiently complex".

A set of predefined, default user accounts is provided when Oracle Database is installed. Security is easily broken through a default database user account, even though a password is supplied for it at installation; one instance is the SCOTT user account, which is a well-known target for intruders. In Oracle Database 11g Release 2 (11.2), default accounts are installed locked with their passwords expired.

IBM DB2 Password Policy

Passwords are case sensitive

Minimum length of password is 8 characters

Alpha-numeric characters are supported

Password expiration period must be set to 300 days

Password with combination of letters in both cases, numbers and special characters is recommended

At least two numeric and two special characters must be used

Reusing previous passwords is not allowed

At least two characters from the previous password must appear in the new password

The account is automatically locked after 10 failed login attempts

IBM Tivoli Directory Server uses three types of password policy (IBM, n. d.):

Group password policy – a group-specific policy, with a single password policy for the whole group. A user may be subject to multiple group policies because a single user may belong to multiple groups

Individual password policy – a user-specific policy that allows each user to have their own settings in the password attributes

Global password policy – created by the server, with the attribute ibm-pwdPolicy set to FALSE; in that state the other policies will be ignored by the server. If a policy has to be applied on the server, the attribute has to be set to TRUE.

SQL Server Password Policy

Minimum length of password is 8 characters

Password with combination of letters in both cases, numbers and special characters is recommended

Password should not contain all or part of the user id. The user id part is defined as three or more consecutive alphanumeric characters, delimited on both ends by characters such as space, tab and return, or by special characters such as comma (,), period (.), hyphen (-), underscore (_) or number sign (#).

It is recommended that passwords are long and complex

Passwords expire periodically, and a password expiry warning is given to the user seven days before expiry

Login ids with expired passwords are disabled
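As a sketch, SQL Server can delegate these checks to the operating system password policy when a login is created with policy enforcement switched on; the login name and password below are hypothetical.

-- Transact-SQL sketch: create a login that must satisfy the password policy.
CREATE LOGIN app_user
    WITH PASSWORD = 'Str0ng!Passw0rd',
         CHECK_POLICY = ON,       -- enforce complexity and lockout rules
         CHECK_EXPIRATION = ON;   -- enforce periodic password expiry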

SQL Server suggests the characteristics of a strong password as listed below (MSDN, n. d.)

It should be at least 8 characters long

It should not be found in the dictionary

It should be changed regularly

It should be a combination of numbers, letters and special characters

It should not be the user id or any person’s name

It should not be a command or a computer name

It should be different from old or previous passwords

Biometric Authentication

Biometric authentication methods use the user’s actual physiological or behavioural features for authentication. The advantage with biometric techniques is they cannot be lost or forgotten. This helps users as well as system administrators to manage and avoid the process of reissuing or temporarily issuing passwords. Matyas and Riha (2010) conclude in their research that the biometric authentication is an excellent supplementary authentication technique. Even simple biometric solutions enhance the overall system security when used with traditional authentication methods.

Unlike the text-based passwords which require a perfect match of two password strings, a biometric-based authentication system functions based on the match of two biometric samples (Jain and Nandakumar, 2012). Biometric systems identify users based on their anatomical features like fingerprint, facial recognition, retina and voice. These features are physically linked to the user and it makes biometric recognition mechanism more reliable in ensuring that only authorized users are able to access the system.

Sul (2011) proposes a fingerprint classification algorithm in which the fingerprint samples are stored in a distributed manner. The biometric system initially records the fingerprints of the user with an appropriate sensor and stores them; this is called enrolment. The pre-processed images are divided into many blocks, and the extracted features of the blocks are used to categorise the image as arch, whorl, loop, etc. These features are later stored as a template and used for authenticating the user's identity along with text-based passwords.

Face recognition can be carried out using local binary patterns (LBP). Texture analysis and motion analysis are important when the image is retrieved for authentication (Darwish, 2010). While authenticating a user's face, the system can produce two types of errors: false negatives, where the system rejects the genuine user, and false positives, where the system accepts a fraudulent one. Efforts to minimise these false-positive errors are still in progress.

Authorisation

Authorisation is the process where the user is given permissions to access a particular data store. Once a request is placed by a user for the access to a data store, the request is validated with the access rights assigned to that user id from the database. If the requested resource is assigned to the user id, the user request is allowed to execute or else, the query will either be terminated or has to be altered based on the set of flexible transformation rules (Eavis and Altamimi, 2012).

Authorisation policies / rules

Authorisation rules allow or reject access to certain objects (requests) by describing the subject (user) to which the rules apply, the object (request) to which the authorisation refers, the action to which the rules refer, and the indication of whether the rule allows or rejects subject (user) access. Authorisation rules generally include details of subjects, objects, privileges, security information, log types, conditions, etc. (Blanco et al., 2009).

Oracle Authorisation Policies

Oracle Identity Manager is in charge of the user access to different procedures in the application. An authorisation engine is embedded in the Identity Manager and it manages the user access with the help of pre-defined authorisation policies. The authorisation policies decide during runtime whether a particular action should be allowed or not. The authorisation policies are defined such that they satisfy the authorisation requirements specified by Identity Manager (Oracle, 2011).

The components of Oracle Identity manager are:

Role management

User management

Authenticated self-service user management

The important components of Oracle authorisation policy are

Identifying details – name and description must be defined

Oracle identity manager feature – these are components of Identity manager like user management and role management. Each feature has its own authorisation policy.

Assignee – is the role to which the privileges are granted by the authorisation policy.

Privileges – are assigned to the assignee. They are identified by the feature for which this authorisation policy is defined.

Data security – defined in terms of entities selection criteria which are used to establish entities for which privilege has to be granted.

DB2 Authorisation Policies

The factors to be decided before creating an authorisation policy are (IBM, n. d.),

Services – are the resources protected by Security manager. The services have to be attached to an authorisation policy in order to be secured by Tivoli Security policy manager. The three methods of attaching a policy are direct attachment through nodes, through inheritance, and through classification.

Application roles – are the categories of user as a general user and authenticated user. Based on these categories, application role identifies the user groups to apply the policies.

Rules – are the conditions applied on the access rights for a specific user.

The components of DB2 authorisation policy are

Policy decision point – evaluates a user request and decides as to accept or reject the request.

Policy enforcement point – receives the decision from above and enforces the same, i.e., either allows the access or denies the access.

Policy distribution target – is the place from where the policy decision points receive the authorised policies from the security policy manager.

SQL Server Authorisation Policies

SQL Server uses role-based access control. To regulate access, authorisation policies are built and stored in Active Directory in the form of authorisation stores. The policies are applied at run time and validated against the policy information in the authorisation stores. The components of an authorisation policy are (Microsoft, 2012):

Policy stores, applications and stores – a policy store contains the policy definitions and is initialised by an application before being used for access control.

Users and groups – include users and user groups

Operations and tasks – a task contains one or more operations which are activated at run time. A task can also contain a role definition.

Roles – a role is a group of operations or tasks, depending on the category of the user's requests.

Business rules – when an application validates the access control at run-time, it refers to the business rules script.

Collections

Lightweight Directory Access Protocol (LDAP)

Lightweight Directory Access Protocol (LDAP) is a client-server protocol which works on TCP/IP for the purpose of data access and data management on the directory. LDAP stores the user information such as the user login id, roles, privileges and user groups. LDAP ensures the easy availability and efficient management of the user data (Li, Wang and Deng, 2010).

LDAP directory is a hierarchical tree structure depicting the network of users based on the roles and privileges. The components of the directory are (Salim et. al, 2009),

Servers – facilitate local data storage directly and allow access to external sources. SLAPD (Stand-Alone LDAP Daemon) is the server in the LDAP suite. The server supports changes to the directory data (adding, deleting or altering).

Clients – access servers over the LDAP network protocol. They operate by requesting that the server execute operations on their behalf. A client first connects to the directory server and then authenticates; finally, it executes zero or more requests before disconnecting.

Utilities – control data at a lower level and do not require the intervention of server. They are mainly used as additional features to maintain the server.

Libraries – LDAP applications are able to access the LDAP functions through the libraries. The rest of the directory components share access to such libraries.

Access Control Model

A key attribute of an access control model is the ability to validate the security of its configurations and policies. The different access control models are (Tidswell and Potter, 2001),

Discretionary access control (DAC) – is entirely under the user's control. An authorised user can grant or revoke access to objects. DAC has a weakness in that it fails to distinguish between users and system programs, which might lead to security breaches.

Mandatory access control (MAC) – is based on security labels which map data users to data items. The data item's label is called the security classification and the data user's label is called the security clearance. Only the DBA can change access rights. Based on the labels, the access policy determines the user's access to system programs or applications.

Role-based access control (RBAC) – is the process of monitoring the access rights based on the roles of users and user groups. RBAC fills in the gap between DAC and MAC. It is easier to reassign users to different roles from the existing ones.

Encryption

The user authentication and access control models are useful to secure a database. But they are not sufficient to ensure the database security, especially when the data files are in readable form. In such cases DBA has to opt for advanced security measures like database encryption. Database encryption provides data protection and ensures that only authorised users are allowed the access to the data files. It also secures data backups in case of failure, theft or other compromise of backup medium (Shmueli et. al, 2010).

According to Liu and Gai (2008), the basic requirements for a database system to be efficiently encrypted are

Sensitive data, even if not used often, needs to be encrypted. This prevents the misuse of the database in case of theft.

The process of encryption or decryption should be understandable to the application.

The storage overhead has to be monitored so that it does not increase too much, because encrypted data is larger than the original data.

The performance of the queries executed on the encrypted data has to be constantly monitored.

Safe key management – data encryption always has to address the safe generation, distribution, destruction and sharing of keys.

Flexible encryption granularity

Encryption Granularity

According to Bouganim and Guo (2009), database encryption refers to,

"The use of encryption techniques to transform a plain text database into a (partially) encrypted database, thus making it unreadable to anyone except those who possess the knowledge of the encryption key(s)"

Based on the complexity of the database system, database encryption can be applied at three levels of granularity – file-system level, database level and application level. The choice of encryption granularity depends on the requirements of the application.

File-system encryption

In this level, the complete file system is encrypted. All data entering the file system is encrypted, and decryption takes place when data is retrieved from it. This method is dependent on the file system and does not touch the database objects and structures. The application has the option of encrypting a part of the file system or the whole file system (Seifert and Yang, 2008).

Database-level encryption

In this level, the entire database is an encrypted object. This implies that encryption is applied to all the user tables, system tables, views, triggers, indexes, procedures, etc. The process therefore requires only one key, which makes key management easier. At this level of encryption, even if the user issues a simple query involving a small amount of data, the entire database has to be decrypted at run time. This affects system performance heavily because the use of indexes on the encrypted data is not allowed (He and Guan, 2010).

Application-level encryption

In this level, the encryption and decryption process takes place within the application. Data entered into the application is encrypted before it is stored, and decryption takes place within the application when the data is retrieved. This level of encryption has the advantage that separate keys are used and the keys never leave the application. However, the possibility of a security breach arises when the application tries to retrieve more data than the space allocated to the user (Bouganim and Guo, 2009).

Keys: Vaults, Manifests and Managers

A key is a variable value used in cryptographic algorithms and is used to either scramble (encrypt) or unscramble (decrypt) the cipher text. The three components pertaining to keys are key vault, key manifest and key manager (Kenan, 2006).

Key Vault

A key vault stores keys in a highly protected environment. Access to the part of the vault holding the keys is restricted to security officers and the cryptographic engine. A vault can store keys for multiple engines, but each key is assigned to one engine only. There are two types of key vaults (Kenan, 2006, Chapter 5).

Key servers – a key server is a vault hosted on its own hardware and connected to the network. When the local engine needs a key, it sends a request over the network to the key server for the correct key. The key server authenticates and authorises the requestor and, if the requestor is valid, delivers the key.

Key stores – provide key access and management interfaces. The key access interface allows the engines to request for keys. The management interface provides the access for administrators to create, delete and configure keys.

The key vault can be considered a separate component that holds the actual keys. The vault has its own designated key credentials, which hold information such as the encryption algorithm, the mode of operation and processing requests. In a database encryption system, the key vault provides services such as transmitting keys to the cryptographic engine, retrieving existing keys, loading new keys, deleting existing keys and setting new key attributes. The key vault does not control the execution of requests; that is accomplished by the key manifest (Pan, 2011).

Key Manifest

The key manifest is the component provided to record keys and engine functions for the cryptographic provider. The manifest provides an abstraction layer so that a provider can use multiple key vaults even if the key IDs in different vaults are similar. A manifest contains the alias ID, key alias, key status, key activation date, key family, engine and key ID. The key ID uniquely identifies a key within a particular vault. The key alias is simply a name and serves no functional purpose. The key state is a function of the key status and activation date. The five states are (Kenan, 2006):

Pending

Live

Expired

Retired

Terminated

Key Manager

A key manager is used when the administrator wants to create, delete or alter keys. The key manager interacts with key vault and key manifest. Any change occurring on the keys should be updated in both vault and manifest and key manager is responsible for this action (Pan, 2011).

Encryption in Applications

Most DBMS vendors provide native encryption features that enable application developers to include additional measures of data security through selective encryption of stored data. Such native features take the form of encryption toolkits or packages (Mattsson, 2005).

Transparent data encryption (TDE) is being used in the three DBMS – Oracle, DB2 and SQL Server. The entire database is secured using a single key. TDE performs all the cryptographic activities at the I/O stage, but within the database system, and removes any requirement from application developers to create custom code to encrypt and decrypt data. Encryption keys are managed by a Hardware security module (HSM) or stored in an external file called wallet which is encrypted using an administrative password (Bouganim and Guo, 2009).
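A minimal Transact-SQL sketch of enabling TDE in SQL Server follows; the database name, certificate name and passwords are hypothetical.

-- Create a master key and certificate in the master database.
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'M@sterKeyP@ss1';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';

-- Create the database encryption key and switch encryption on.
USE SalesDB;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE SalesDB SET ENCRYPTION ON;  -- cryptographic work happens at the I/O level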

Views and Triggers

Views

A database view is used to restrict the selection of data from the large number of records in the tables under consideration. A view can display selected database fields or an entire table. Views can be sorted to organise the order of records and grouped into sets for display, and they offer other options such as totals and subtotals. User interaction with the database is carried out through views. A properly selected set of views is one of the keys to creating a useful database. Every view must have a view definition query that tells the database which tables, columns and rows make up the new view. Views can be built from other views. The 'data hiding' ability of views provides yet another tool in the security toolkit (BCU Moodle, 2012).
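For example, a view can expose only selected columns and rows of a base table, and users can be granted access to the view instead of the table itself; the table, view and user names below are hypothetical.

-- Sketch: hide sensitive columns behind a restricted view.
CREATE VIEW staff_directory AS
    SELECT employee_id, first_name, last_name, department
    FROM employees
    WHERE active = 1;

-- Users query the view; they are never granted access to the base table.
GRANT SELECT ON staff_directory TO hr_clerk;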

Materialised views

Materialised views also known as indexed views, are used to improve the performance of complex queries. To receive effective outputs from materialised views, the optimizer has to restructure the queries against base tables into equivalent queries that use multiple materialised views. The data stored in the view must be in sync with the data in the base tables upon which it depends, for the restructured queries to execute correctly (DeHaan et. al., 2005).

A materialised view can be read-only, updatable or writeable. Read-only views do not allow users to perform data manipulation language (DML) statements, whereas DML statements can be performed on updatable and writeable materialised views. Materialised views are used for the purposes mentioned below (Oracle, 2008):

Ease network loads – instead of using a single database server, the load can be distributed across multiple database servers. Multitier materialised views, which are views created from other materialised views, help distribute the load effectively because users can access the materialised view sites instead of directly accessing the master sites. Data replication using materialised views increases access to data by providing local access to the target data.

Create a mass deployment environment – deployment environment parameters enable the creation of data sets for individual users without changing the deployment template

Enable data subsetting – the views allow the replication of data on a column-level and row-level basis.

Enable disconnected computing – materialised views do not require dedicated network connection.
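A minimal Oracle-style sketch of a read-only materialised view that is completely refreshed once a day is shown below; the table, column and view names are hypothetical.

-- Sketch: a read-only materialised view refreshed in full once a day.
CREATE MATERIALIZED VIEW sales_summary
    REFRESH COMPLETE
    START WITH SYSDATE NEXT SYSDATE + 1
AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region;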

Restrictions on views

Though the views are similar to tables and can include all or certain fields of the base tables, there are a few restrictions on the views (IBM, 2012),

The views cannot be assigned an index.

A key or a constraint cannot be created on a view.

ORDER BY clause cannot be used on the views.

Whether insert, update and delete operations can be performed on a view depends on its definition.

Triggers

Triggers, as stated by Ullman and Widom (2008), are event-condition-action rules. They differ from the database constraints in three ways stated below

Triggers are activated only when an event specified in their definition occurs; such events generally include an insert, update or delete on a particular relation

Once the event activates the trigger, it tests a condition. If the condition fails, the trigger does not respond to the event

If the condition is satisfied, the DBMS performs the action associated with the trigger. Such actions might include enforcing referential integrity, preventing invalid transactions, or any other sequence of database operations, such as gathering statistics on table access.

Triggers are very powerful elements of a database system which, if used properly, can be very helpful in detecting data modification of any kind (at object and/or data level). Their efficiency rests on the fact that each transaction passes through a layer that detects data modification.

Types of triggers

There are two types of triggers – DDL (Data Definition Language) Triggers and DML (Data Manipulation Language) Triggers (Azemovic and Music, 2010).

DML Triggers – are executed on any event of modification on the data tables, like the UPDATE, INSERT and DELETE statements. The INSTEAD OF trigger is an example of DML triggers and is defined on views.

DDL Triggers – are executed on specific events where changes are made to the data objects, like CREATE, DROP and ALTER statements.
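As an illustration, the following Transact-SQL sketch shows a DML trigger that audits updates to a hypothetical salaries table; the audit table and column names are assumptions.

-- Sketch: a DML trigger that records salary changes in an audit table.
CREATE TRIGGER trg_audit_salary
ON salaries
AFTER UPDATE
AS
BEGIN
    INSERT INTO salary_audit (employee_id, old_salary, new_salary, changed_at)
    SELECT d.employee_id, d.salary, i.salary, GETDATE()
    FROM deleted AS d
    JOIN inserted AS i ON i.employee_id = d.employee_id;
END;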

Uses of Triggers

Triggers are defined using the CREATE TRIGGER command. Oracle uses PL/SQL to define triggers, while SQL Server uses Transact-SQL. Triggers are mainly used to (Beynon-Davies, 2004):

Enforce complex business rules

Compute derived column values

Audit database changes

Implement advanced security requirements

Prevent invalid transactions

Modify table data when DML statements are executed on views

Enforce referential integrity across nodes in distributed database

Triggers across DBMS

Oracle uses PL/SQL programs to define trigger statements whereas SQL Server uses Transact-SQL. IBM DB2 uses SQL statements for creating triggers (Lungu and Ghencea, 2011).

There are different categories in SQL Server triggers –

Multiple triggers – includes creation of triggers on DDL, DML and LOGON events.

Recursive triggers – SQL Server allows recursive invocation of triggers

Nested triggers – the triggers can be nested to a maximum of 32 levels

The three components of Oracle triggers are –

Triggering event – includes all the changes made using the DDL and DML statements

Trigger constraint – is an optional component

Trigger action

Privilege Management

Privilege management uses the role-based access control (RBAC) model to manage the privileges assigned to users through roles. RBAC is an efficient model that can establish access control over the database resources of a large-scale enterprise. The RBAC model places the role between the user and the access permission: roles are defined according to the security strategy, access permissions are assigned to each role, and roles are assigned to users, so users access information resources indirectly through roles. The relationships are many-to-many: each user can hold multiple roles, each role can include multiple users, each role may contain several permissions, and each permission can belong to many roles (Cheng and Jia-Hui, 2011). There are three components in the RBAC model – Core RBAC, Inherited RBAC and Constraint RBAC.

Elements of RBAC

There are five basic elements in Core RBAC – user, role, object, operation and privilege. The key to RBAC is the relationship between roles and users and between roles and permissions, expressed through user assignment and permission assignment. The elements are defined as follows (Jiang et al., 2010):

User – corresponds to the application users

Role – is a bundle of privileges that has its own name and can be issued to individual database users.

Object – corresponds to the resources shared on the database, which includes the data files, tables, views, etc.

Operation – an action carried out through the queries executed by the users; queries can include both reading from and writing to the data files.

Privileges – are permissions assigned to users in order to perform the operations relevant to them.
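A minimal Oracle-style SQL sketch of how these elements relate is shown below; the role, user and table names are hypothetical. Privileges are granted to a role, and the role is then granted to users.

CREATE ROLE sales_clerk;                        -- the role
GRANT SELECT, INSERT ON orders TO sales_clerk;  -- privileges on an object (operations)
GRANT sales_clerk TO alice;                     -- user assignment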

Privileges

A privilege is a right granted to a user to execute an SQL statement or to access another user's object. Privileges are bundles of rights that allow database users to perform tasks relevant to the roles to which they are assigned. The DBA (database administrator) is responsible for assigning specific privileges to specific users. Two types of privileges are available in a DBMS: system privileges and object privileges (BCU Moodle, 2012).

System-level privileges

System privileges are general-purpose security rights that apply to the user rather than to any one object in the database. Only the database administrator (DBA) or a user with admin-level rights can grant system-level privileges. To issue a privilege, the GRANT statement is used; for example, a user created in the authentication stage has to be allowed to connect to the database and then create tables. To withdraw issued privileges, the REVOKE statement is used.
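For example, using Oracle-style syntax and a hypothetical user name:

-- Grant system privileges so the user can connect and create tables.
GRANT CREATE SESSION, CREATE TABLE TO alice;

-- Withdraw a system privilege that is no longer needed.
REVOKE CREATE TABLE FROM alice;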

Object-level privileges

Object-level privileges are more specific and focus on individual database objects such as tables, views or indexes. These rights can be given by the owner of the object, which strengthens the level of database security. If a privilege is granted to "public", it can be exercised by all other users; sysdba, however, cannot be granted to "public". With object-level privileges, users are given rights such as inserting data into tables and modifying tables. GRANT (to issue privileges) and REVOKE (to withdraw privileges) are used.
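For example, with hypothetical table and user names:

-- Object-level privileges: the owner grants and withdraws rights on a table.
GRANT SELECT, INSERT, UPDATE ON employees TO bob;
REVOKE UPDATE ON employees FROM bob;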


