Introduction To Encryption And Decryption


02 Nov 2017


CHAPTER 1

INTRODUCTION

History

Encryption, as defined in [27], is the process of converting messages, information, or data into a form unreadable by anyone except the intended recipient.

Encrypted data must be deciphered, or decrypted, before it can be read by the recipient. The root of the word encryption, crypt, comes from the Greek word kryptos, meaning hidden or secret. In its earliest form, people attempted to conceal information they wanted to keep in their own possession by substituting parts of it with symbols, numbers and pictures; this chapter highlights, in chronological order, the history of cryptography through the centuries. For different reasons, humans have long been interested in protecting their messages.

Threats to computer and network security increase with each passing day and come from a growing number of sources. No computer or network is immune from attack. A recent concern is the susceptibility of the power grid and other national infrastructure to a systematic, organized attack on the United States from other nations or terrorist organizations.

Encryption, or the ability to store and transmit information in a form that is unreadable to anyone other than intended persons, is a critical element of our defense to these attacks. Indeed, man has spent thousands of years in the quest for strong encryption algorithms.

Objective

This project will meet the following objectives:

To explore and implement an encryption and digital signature program to use with the aim of providing the user with a basic knowledge of the fundamental techniques of encryption and digital signature.

To provide the user with authentication, integrity, confidentiality and non-repudiation of the data.

To provide the user with an enhanced security of their data.

To provide the user with a way to easily and conveniently protect the data.

To provide a user-friendly interface.

1.3 Scope

The scope of this project includes the following features:

Easy and convenient encryption.

The program works for variable size of grid.

All the outliers can be detected by the program in a two phase manner.

The performance of the software package depends on the volume of data i.e. the number of objects in the dataset.

The inputs in terms of file size in a dataset have to be entered manually.

Efficient in terms of memory utilization.

Time saving.

1.4 Algorithm

Various algorithms have been used so far for the encryption of different file formats such as text files, audio files, images and videos. Among the algorithms commonly used for encryption are RSA (Rivest, Shamir, Adleman) and DES (Data Encryption Standard). The algorithm used in this thesis work for the encryption and decryption of text files is DSA (Digital Signature Algorithm), which is more efficient in terms of time and security. This leads to efficient encryption that is more refined than the existing techniques.

To encrypt the text file we use the DSA algorithm. Initially, we take the ASCII value of each character of the text file and place it in a grid of the required size. The data in the grid is read diagonally and written into a new grid of equal size from left to right, row by row. The sender encrypts the text file using the private key and, at the receiving end, the file is decrypted using the public key of the sender, which ensures authentication, integrity and confidentiality.
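As a sketch, the grid transposition step described above might look like the following Python fragment. The exact diagonal reading order is an assumption (anti-diagonals starting from the top-left), since the text does not fully specify it, and a 4x4 grid is used here for readability instead of the 32x32 grid of the thesis.

```python
def diagonal_shuffle(text, n=4):
    # Pad with spaces so the ASCII values fill an n x n grid exactly.
    codes = [ord(c) for c in text.ljust(n * n)[:n * n]]
    grid = [codes[i * n:(i + 1) * n] for i in range(n)]
    # Read the grid diagonally (here: anti-diagonals, where row + col
    # is constant -- one plausible reading of "read diagonally").
    diagonal = [grid[r][d - r]
                for d in range(2 * n - 1)
                for r in range(max(0, d - n + 1), min(n, d + 1))]
    # Write into a new grid of equal size, left to right, row by row.
    return [diagonal[i * n:(i + 1) * n] for i in range(n)]
```

For example, the 16 characters "abcdefghijklmnop" fill a 4x4 grid row by row, and the diagonal read produces the shuffled order a, b, e, c, f, i, d, g, j, m, h, k, n, l, o, p before the values are rewritten row by row.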

1.5 Use of the project

Data encryption helps you protect the privacy of your email messages, documents and sensitive files. Encryption works with both text and files. We simply select what we want to encrypt, and encryption and decryption keep documents, private information and files confidential. Encryption is also used to ensure the confidentiality of files and documents against an adversary, so that they remain secure.

Data encryption also provides security and safety for files and other important documents, so that while sending them nobody other than the recipient can see them.

This project has the similar mechanism to provide the security and safety of the files by using a public key algorithm named DSA.

Today the prominence of the Internet is increasing day by day, and the transfer of files and confidential information over the Internet demands that they be secured; this can be accomplished using encryption and decryption. In the current scenario, encryption and decryption are widely used in almost every field, such as defence and banking.

CHAPTER 2

LITERATURE SURVEY

2.1 Introduction to Encryption and Decryption

Data encryption [9], [27] is the conversion of data into a form, called a ciphertext, that cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data back into its original form, so it is easily understood. Encryption is a mechanism for hiding information by turning readable text into a stream of gibberish in such a way that someone with the proper key can make it readable again.

Around 1900 BC, an Egyptian scribe used non-standard hieroglyphs in an inscription. Kahn lists this as the first documented example of encryption (written cryptography).

Encryption helps you protect the privacy of your messages, documents and sensitive files. In its earliest form, people attempted to conceal information they wanted to keep in their own possession by substituting parts of it with symbols, numbers and pictures. For different reasons, humans have long been interested in protecting their messages.

2.1.1 Types of Encryption Algorithms

2.1.1.1 Symmetric Key Algorithms

Symmetric key encryption algorithms [18] use a single secret key to encrypt and decrypt data. You must secure the key from access by unauthorized agents because any party that has the key can use it to decrypt data. Secret-key encryption is also referred to as symmetric encryption because the same key is used for encryption and decryption. Secret-key encryption algorithms are extremely fast (compared to public-key algorithms) and are well suited for performing cryptographic transformations on large streams of data.

The diagram for Secret Key Algorithms below illustrates the mechanism in a well defined way.


Figure 2.1: Symmetric Key Cryptography

Figure 2.1 illustrates the secret-key algorithm. This algorithm uses the same secret key on both sides, i.e. the sender side and the receiver side, so both parties require the same shared secret key. There are various symmetric key algorithms in use today.
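To make the shared-key idea of figure 2.1 concrete, here is a deliberately toy Python sketch. XOR with a repeating key is not a secure cipher and merely illustrates the defining property: the same key both encrypts and decrypts. Real systems use block ciphers such as DES, 3DES or AES, described below.

```python
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the repeating key; applying it twice undoes it,
    # so one function serves as both "encrypt" and "decrypt".
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

secret_key = b"shared-key"                       # known to both parties only
ciphertext = xor_cipher(b"attack at dawn", secret_key)
recovered  = xor_cipher(ciphertext, secret_key)  # same key reverses it
```

Anyone who obtains the shared key can decrypt, which is exactly why the key distribution problem discussed later arises.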

Brief definitions of the most common encryption techniques are given as follows:

DES (Data Encryption Standard) was the first encryption standard to be recommended by NIST (National Institute of Standards and Technology). DES uses a 64-bit key with a 64-bit block size. DES uses the key to customize the transformation, so that decryption can supposedly be performed only by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity and are thereafter discarded: every 8th bit of the selected key is discarded, that is, positions 8, 16, 24, 32, 40, 48, 56 and 64 are removed from the 64-bit key, leaving behind a 56-bit key. Hence the effective key length is 56 bits, and it is always quoted as such. Since that time, many attacks and methods have exposed weaknesses in DES, which made it an insecure block cipher [3], [10].
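The parity-bit removal described above can be sketched as follows; the key is modelled as a plain list of 64 bits purely for illustration.

```python
def drop_parity_bits(key_bits):
    # Keep a bit unless its 1-based position is a multiple of 8
    # (positions 8, 16, 24, 32, 40, 48, 56, 64 carry parity only).
    assert len(key_bits) == 64
    return [bit for pos, bit in enumerate(key_bits, start=1) if pos % 8 != 0]

effective_key = drop_parity_bits([1, 0] * 32)   # 64 bits in, 56 bits out
```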

3DES is an enhancement of DES; it has a 64-bit block size with a 192-bit key size. In this standard the encryption method is similar to the one in the original DES, but it is applied three times to increase the encryption level and the average safe time. It is a known fact that 3DES is slower than other block cipher methods [10].

RC2 is a 64-bit block cipher with a variable key size that ranges from 8 to 128 bits. Its 18 rounds are arranged as a source-heavy Feistel network, with 16 rounds of one type (MIXING) punctuated by two rounds of another type (MASHING). RC2 is vulnerable to a related-key attack using 2^34 chosen plaintexts [21].

Blowfish is a 64-bit block cipher that can be used as a replacement for the DES algorithm. It takes a variable-length key, ranging from 32 bits to 448 bits; the default is 128 bits. Blowfish is unpatented, license-free, and available free for all uses. Reduced-round variants of Blowfish (14 rounds or fewer) have been analysed. Blowfish was later succeeded by Twofish [3], [21].

AES is a block cipher. It has a variable key length of 128, 192 or 256 bits (default 256) and encrypts data blocks of 128 bits in 10, 12 or 14 rounds depending on the key size. AES encryption is fast and flexible; it can be implemented on various platforms, especially in small devices, and it has been carefully tested for many security applications [11], [18]. AES is based on a design principle known as a substitution-permutation network and is fast in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael with a fixed block size of 128 bits and a key size of 128, 192 or 256 bits.

2.1.1.2 Asymmetric Key Algorithms

Asymmetric key algorithms have been called the most significant new development in cryptography in the last 300-400 years. Modern PKC (public-key cryptography) was first described publicly by Stanford University professor Martin Hellman and graduate student Whitfield Diffie in 1976.

An asymmetric key algorithm is also called a public-key algorithm. Public-key cryptography is a fundamental and widely used technology around the world, enabling secure transmission of information on the Internet and other communication systems; this concept was proposed in [15]. It is also known as asymmetric cryptography because the key used to encrypt a message differs from the one used to decrypt it. In public-key cryptography, a user has a pair of cryptographic keys, a public key and a private key. The private key is kept secret, while the public key may be widely distributed and known to any user. Messages are encrypted with the recipient's public key and can only be decrypted with the corresponding private key.

Generic PKC employs two keys that are mathematically related, although knowledge of one key does not allow someone to easily determine the other key. One key is used to encrypt the plaintext and the other key is used to decrypt the ciphertext. The important point here is that it does not matter which key is applied first, but that both keys are required for the process to work, as shown in figure 2.2. Because a pair of keys is required, this approach is also called asymmetric cryptography. In PKC, one of the keys is designated the public key and may be advertised as widely as the owner wants. The other key is designated the private key and is never revealed to another party. It is straightforward to send messages under this scheme.


Figure 2.2: Asymmetric Key Algorithm

Figure 2.2 shows plaintext encrypted with the receiver's public key and decrypted with the receiver's private key. Only the intended receiver holds the private key for decrypting the ciphertext. Note that the sender can also encrypt messages with a private key, which allows anyone that holds the sender's public key to decrypt the message, with the assurance that the message must have come from the sender.

With asymmetric algorithms, messages are encrypted with either the public or the private key but can be decrypted only with the other key. Only the private key is secret; the public key can be known by anyone. With symmetric algorithms, the shared key must be known only to the two parties; this is called the key distribution problem. Asymmetric algorithms are slower but have the advantage that there is no key distribution problem. Various public-key algorithms [17] are used in the current scenario for security purposes when communicating over the Internet.

The most obvious application of a public key encryption system is confidentiality; a message that a sender encrypts using the recipient's public key can be decrypted only by the recipient's paired private key (assuming, of course, that no flaw is discovered in the basic algorithm used). Another type of application in public-key cryptography is that of digital signature schemes. Digital signature schemes can be used for sender authentication and non-repudiation.

Public-key encryption has a much larger keyspace, or range of possible values for the key, and is therefore less susceptible to exhaustive attacks that try every possible key. A public key is easy to distribute because it does not have to be secured. Public-key algorithms can be used to create digital signatures to verify the identity of the sender of data. However, public-key algorithms are extremely slow (compared to secret-key algorithms) and are not designed to encrypt large amounts of data. Public-key algorithms are useful only for transferring very small amounts of data.

Public-key encryption uses a private key that must be kept secret from unauthorized users and a public key that can be made public to anyone. The public key and the private key are mathematically linked; data encrypted with the public key can be decrypted only with the private key, and data signed with the private key can be verified only with the public key. The public key can be made available to anyone; it is used for encrypting data to be sent to the keeper of the private key. Public-key cryptographic algorithms are also known as asymmetric algorithms because one key is required to encrypt data while another is required to decrypt it.

Public-key cryptographic algorithms use a fixed buffer size whereas secret-key cryptographic algorithms use a variable-length buffer. Public-key algorithms cannot be used to chain data together into streams the way secret-key algorithms can because only small amounts of data can be encrypted. Therefore, asymmetric operations do not use the same streaming model as symmetric operations.

2.2 Digital Signature Algorithm

The Digital Signature Algorithm (DSA) is a United States Federal Government standard, or FIPS, for digital signatures. It was proposed by the National Institute of Standards and Technology (NIST) in August 1991 for use in its Digital Signature Standard (DSS), specified in FIPS 186 and adopted in 1994. A minor revision was issued in 1996 as FIPS 186-1. The standard was expanded further in 2000 as FIPS 186-2 and again in 2009 as FIPS 186-3.

A digital signature or digital signature scheme [26] is a mathematical scheme for demonstrating the authenticity of a digital message or document. A valid digital signature gives a recipient reason to believe that the message was created by a known sender, and that it was not altered in transit. Digital signatures are commonly used for software distribution, financial transactions, and in other cases where it is important to detect forgery or tampering.

Digital signatures employ a type of asymmetric cryptography. For messages sent through a nonsecure channel, a properly implemented digital signature gives the receiver reason to believe the message was sent by the claimed sender. Digital signatures are equivalent to traditional handwritten signatures in many respects; properly implemented digital signatures are more difficult to forge than the handwritten type. Digital signature schemes in the sense used here are cryptographically based, and must be implemented properly to be effective. Digital signatures can also provide non-repudiation, meaning that the signer cannot successfully claim they did not sign a message while also claiming their private key remains secret; further, some non-repudiation schemes offer a time stamp for the digital signature, so that even if the private key is exposed, the signature remains valid. Digitally signed messages may be anything representable as a bit string: examples include electronic mail, contracts, or a message sent via some other cryptographic protocol.

The digital signature is attached to the message, and sent to the receiver. The receiver then does the following:

Using the sender's public key, decrypts the digital signature to obtain the message digest generated by the sender.

Uses the same message digest algorithm used by the sender to generate a message digest of the received message.

Compares both message digests (the one sent by the sender as a digital signature, and the one generated by the receiver). If they are not exactly the same, the message has been tampered with by a third party. We can be sure that the digital signature was sent by the sender (and not by a malicious user) because only the sender's public key can decrypt the digital signature, which was encrypted by the sender's private key; remember that what one key encrypts, the other one decrypts, and vice versa. If decrypting with the public key yields a faulty message digest, this means that either the message or the message digest is not exactly what the sender sent.
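The digest-comparison step above can be sketched with Python's hashlib, using SHA-1 as the thesis does later. Here recovered_digest stands in for the digest obtained by decrypting the signature with the sender's public key, and the message strings are made-up examples.

```python
import hashlib

def digests_match(received_message: bytes, recovered_digest: bytes) -> bool:
    # Recompute the digest of the received message with the same hash
    # algorithm the sender used, then compare the two digests.
    return hashlib.sha1(received_message).digest() == recovered_digest

# Stand-in for the digest recovered from the digital signature.
recovered_digest = hashlib.sha1(b"wire transfer: $100").digest()
```

If even one byte of the message is altered in transit, the recomputed digest no longer matches and the tampering is detected.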

The Digital Signature Algorithm has been used to a great extent due to the features it provides, such as authentication, integrity and non-repudiation. (Authentication) Digital signatures can be used to authenticate the source of messages: when ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user.


Figure 2.3: Digital Signature

The importance of high confidence in sender authenticity is especially obvious in a financial context. Integrity means that, in many scenarios, the sender and receiver of a message need confidence that the message has not been altered during transmission; although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. Non-repudiation is the property by which an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key alone does not enable a fraudulent party to forge a valid signature.

The Digital Signature Algorithm includes two processes: signature generation and signature verification. Encryption is done in the signature generation process using the private key of the sender, while decryption is done in the signature verification process using the public key of the sender. The hash algorithm used for creating the message digest is SHA-1 (Secure Hash Algorithm 1). The algorithm computes the following values during these two processes.

The algorithm is:

DSA Key Generation

1. Choose a prime p between 512 and 1024 bits in length; the number of bits in p must be a multiple of 64.

2. Choose a 160-bit prime q such that q divides (p - 1).

3. Create e1 as a q-th root of 1 modulo p (e1^q = 1 mod p): choose an element e0 and calculate e1 = e0^((p-1)/q) mod p.

4. Choose d as the private key and calculate e2 = e1^d mod p.

5. The public key is (e1, e2, p, q); the private key is d.

M: message    r: random secret    h(M): message digest

S1, S2: signature    d: private key    V: verification value

(e1, e2, p, q): public key

DSA Signature Creation

1. Choose a random number r (1 <= r <= q).

2. Calculate the signature S1 = (e1^r mod p) mod q.

3. Create a digest of the message, h(M).

4. Calculate the signature S2 = (h(M) + d*S1) * r^(-1) mod q.

5. Send M, S1 and S2.

DSA Signature Verification

1. Check that 0 < S1 < q.

2. Check that 0 < S2 < q.

3. Calculate V = [(e1^(h(M)*S2^(-1)) * e2^(S1*S2^(-1))) mod p] mod q.

4. If S1 is congruent to V, the message is accepted; otherwise it is rejected.
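The three procedures above can be traced end to end with deliberately tiny, insecure numbers, so each formula can be checked by hand. The parameters p = 607 and q = 101, the fixed values of d and r, and the byte-sum digest are illustrative assumptions only; real DSA uses a 512-1024 bit p, a 160-bit q, SHA-1 as h(M) and a fresh random secret r for every signature.

```python
# Key generation (toy parameters: q divides p - 1, since 606 = 6 * 101).
p, q = 607, 101
e0 = 2
e1 = pow(e0, (p - 1) // q, p)   # e1 = e0^((p-1)/q) mod p, a q-th root of 1
d = 17                          # private key
e2 = pow(e1, d, p)              # public key is (e1, e2, p, q)

def h(message: bytes) -> int:
    # Toy message digest (stand-in for SHA-1 in the real algorithm).
    return sum(message) % q

def sign(message: bytes, r: int):
    # r must be a fresh random secret per signature; fixed here for clarity.
    s1 = pow(e1, r, p) % q                              # S1 = (e1^r mod p) mod q
    s2 = (h(message) + d * s1) * pow(r, q - 2, q) % q   # S2 = (h(M)+d*S1)*r^-1 mod q
    return s1, s2

def verify(message: bytes, s1: int, s2: int) -> bool:
    if not (0 < s1 < q and 0 < s2 < q):
        return False
    w = pow(s2, q - 2, q)                               # S2^(-1) mod q
    v = pow(e1, h(message) * w % q, p) * pow(e2, s1 * w % q, p) % p % q
    return v == s1                                      # accept iff V == S1

s1, s2 = sign(b"hi", r=23)
assert verify(b"hi", s1, s2)        # genuine message is accepted
assert not verify(b"ho", s1, s2)    # altered message is rejected
```

Note that modular inverses are taken with Fermat's little theorem (x^(q-2) mod q), which works because q is prime; the sender transmits (M, S1, S2) and the verifier needs only the public key (e1, e2, p, q).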


Figure 2.4: Digital Signature Algorithm

2.2.1 Uses of Digital Signature:

As organizations move away from paper documents with ink signatures or authenticity stamps, digital signatures can provide added assurance of the provenance, identity and status of an electronic document, as well as acknowledging informed consent and approval by a signatory. The United States Government Printing Office publishes electronic versions of the budget, public and private laws, and congressional bills with digital signatures. Universities including Penn State, the University of Chicago, and Stanford are publishing electronic student transcripts with digital signatures. Below are some common reasons for applying a digital signature to communications:

Authentication: Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message is truly sent from an authorized source, acting on such a request could be a grave mistake.

Integrity: In many scenarios, the sender and receiver of a message may have a need for confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. (Some encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if a message is digitally signed, any change in the message after signing will invalidate the signature. Furthermore, there is no efficient way to modify a message and its signature to produce a new message with a valid signature, because this is still considered computationally infeasible for most cryptographic hash functions (see collision resistance).

Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key only does not enable a fraudulent party to fake a valid signature.

2.3 Security Issues

The state of security in the real world is lucidly discussed in [23]. Though there have been many intellectual successes in the areas of access control, information flow based multilevel security, public key cryptography and the development of esoteric cryptographic protocols, the security of millions of the deployed systems is such that any determined attacker can break in and compromise the information infrastructure.

The security issues [7],[27] include security weaknesses in the operating systems of attached computers as well as vulnerabilities in Internet routers and other network devices. These include denial of service attacks; IP spoofing, in which intruders create packets with false IP addresses and exploit applications that use authentication based on IP; and various forms of eavesdropping and packet sniffing in which attackers read transmitted information, including logon information and database contents. Over time, the attacks on Internet and Internet – attached systems have grown more sophisticated while the amount of skill and knowledge required to mount an attack has declined. Attacks have become more automated and can cause greater amounts of damage.

Cryptography has long been of interest to intelligence gathering and law enforcement agencies. Secret communications may be criminal or even treasonous. Because of its facilitation of privacy, and the diminution of privacy attendant on its prohibition, cryptography is also of considerable interest to civil rights supporters. Accordingly, there has been a history of controversial legal issues surrounding cryptography, especially since the advent of inexpensive computers has made widespread access to high quality cryptography possible.

2.4 Problem Statement

Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used to protect information within many kinds of civilian systems. Nowadays, the security services used for secure communication over the Internet and other networks include the following:

2.4.1 Authentication

Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity, as discussed in [26], is especially obvious in a financial context.

2.4.2 Integrity

In many scenarios, the sender and receiver of a message may have a need for confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it.

2.4.3 Non-repudiation

Non-repudiation [15], [26], or more specifically non-repudiation of origin, is an important aspect of digital signatures. By this property, an entity that has signed some information cannot at a later time deny having signed it. Similarly, access to the public key alone does not enable a fraudulent party to forge a valid signature.

Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Encrypting data in transit also helps to secure it, as it is often difficult to physically secure all access to networks. Encryption by itself can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a Message Authentication Code (MAC) or a digital signature. Standards and cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security can be a challenging problem; a single slip-up in system design or execution can allow successful attacks. If an adversary can intercept the message in transit, he or she may tamper with it. The sender should therefore use a digital signature together with a hash algorithm, so that any tampering with the message can be detected and the transfer completed successfully.

2.5 Proposed System

In this thesis work, the technique has been implemented for text files with a grid size of 32x32. The key is generated randomly for each session during encryption and decryption with the DSA algorithm on the 32x32 grid. Because the file is encrypted more than once, its security is enhanced to a great extent.

In this work the Digital Signature Algorithm is used for the encryption and decryption of text files, providing features such as authentication, integrity and non-repudiation of the text files (plaintext). The algorithm works efficiently on a 32x32 grid, as less padding is required than with grid sizes of 64 and 128. The efficiency of the work performed with the DSA algorithm is also compared with the reported work on the RSA algorithm in terms of security and encryption and decryption time.

As a further enhancement, this technique can be applied to images, audio and video using some public-key algorithm with a variable grid size, i.e. grid sizes of 32, 64 and 128. Instead of taking a text file as plaintext, an image, audio or video file can be taken as plaintext and the proposed technique applied for encryption and decryption. As the grid size increases, the security aspect also increases (since the size of the private and public keys increases), but performance diminishes because of the increased complexity and the generated delay, although the algorithm still works efficiently.

2.6 Advantages of Presented System

Nowadays, information security is becoming increasingly important, and encryption and decryption play a major role in authenticating resources.

While the field of cryptography has been studied extensively from very early on, most of the work has concentrated on securing messages, that is, encrypting and decrypting them when communicated over a non-secure channel. As mentioned in this work, various algorithms have been used from the very beginning for the encryption and decryption of files and messages communicated over the Internet, to provide enhanced security so that they are not attacked by any adversary, as discussed in [1].

This means the biggest advantage is that the security of the files has been increased many times over by the use of cryptographic algorithms. Features such as authentication, confidentiality, non-repudiation and integrity are provided by the DSA algorithm used for the encryption and decryption of the text files, and hence the security aspect is also enhanced. The adversary will not be able to attack the messages or files in transit and, even if they are intercepted, will be unable to alter them undetected. Due to the various features provided by the DSA algorithm in this work, security is greatly enhanced, which makes the whole system more reliable.

CHAPTER 3

SYSTEM ANALYSIS AND PLANNING

System analysis and design refers to the process of examining a business situation with the intent of improving it through better procedures and methods. System development can generally be thought of as having two major components: system analysis and system design. System design is the process of planning a new system to replace or complement an existing system. But before this planning can be done, we must thoroughly understand the existing system and determine how computers can best be used to make its operation more effective. System analysis, then, is the process of gathering and interpreting facts, diagnosing problems and using the information to recommend improvements to the system.

3.1. Requirement Analysis

Requirement analysis, in systems engineering and software engineering, encompasses those tasks that go into determining the needs or conditions to be met by a new or altered product, taking account of the possibly conflicting requirements of the various stakeholders, such as beneficiaries or users.

Requirement analysis is critical to the success of a development project. Requirements must be documented, actionable, measurable, testable, related to identified business needs or opportunities, and defined to a level of detail sufficient for system design.

Requirements are a description of how a system should behave or a description of system properties or attributes. It can alternatively be a statement of ‘what’ an application is expected to do. The Software Requirements Analysis Process covers the complex task of eliciting and documenting the requirements of all these users, modeling and analyzing these requirements and documenting them as a basis for system design.

[Figure: the requirement analysis process. Process input feeds Requirement Analysis, which is linked by a Requirement Loop to Functional Analysis & Allocation, which is linked by a Design Loop to Design Synthesis, producing the process output; the whole cycle is balanced by System Analysis & Control.]

Figure 3.1: Requirement Analysis Process

Steps in Requirements Analysis Process

Fix system boundaries

Identify the customer

Requirements elicitation

Requirements Analysis Process

Requirements Specification

Requirements Management

3.1.2 Requirements Analysis

1. Brainstorming Session

Brainstorming is a group creativity technique designed to generate a large number of ideas for the solution of a problem. Although brainstorming has become a popular group technique, when applied in a traditional group setting, researchers have not found evidence of its effectiveness for enhancing either the quantity or quality of ideas generated. Because of such problems as distraction, social loafing, evaluation apprehension, and production blocking, conventional brainstorming groups are little more effective than other types of groups, and they are actually less effective than individuals working independently. There are four basic rules in brainstorming. These are intended to reduce social inhibitions among group members, stimulate idea generation, and increase the overall creativity of the group.

Focus on quantity: This rule is a means of enhancing divergent production, aiming to facilitate problem solving through the maxim, quantity breeds quality. The assumption is that the greater the number of ideas generated, the greater the chance of producing a radical and effective solution.

Withhold criticism: In brainstorming, criticism of ideas generated should be put 'on hold'. Instead, participants should focus on extending or adding to ideas, reserving criticism for a later 'critical stage' of the process. By suspending judgment, participants will feel free to generate unusual ideas.

Welcome unusual ideas: To get a good and long list of ideas, unusual ideas are welcomed. They can be generated by looking from new perspectives and suspending assumptions. These new ways of thinking may provide better solutions.

Combine and improve ideas: Good ideas may be combined to form a single better idea, as suggested by the slogan "1+1=3". It is believed to stimulate the building of ideas by a process of association.

2. SRS Document

A Software Requirements Specification (SRS) is a complete description of the behavior of the system to be developed. It includes a set of use cases that describe all the interactions the users will have with the software. Use cases are also known as functional requirements. In addition to use cases, the SRS also contains non-functional (or supplementary) requirements. Non-functional requirements are requirements which impose constraints on the design or implementation (such as performance requirements, quality standards, or design constraints).

Goals of SRS are:

It provides feedback to the customer. An SRS is the customer's assurance that the development organization understands the issues or problems to be solved and the software behavior necessary to address those problems.

It decomposes the problem into component parts. The simple act of writing down software requirements in a well-designed format organizes information, places borders around the problem, solidifies ideas, and helps break down the problem into its component parts in an orderly fashion.

It serves as an input to the design specification. Therefore, the SRS

must contain sufficient detail in the functional system requirements so that a design solution can be devised.

3.2 Feasibility Study

Eight steps are involved in the feasibility analysis. They are:

Form a project team and appoint a project leader.

Prepare system flowcharts.

Enumerate potential proposed systems.

Define and identify characteristics of proposed system.

Determine and evaluate performance and cost effectiveness of each proposed system.

Weight system performance and cost data.

Select the best proposed system.

Prepare and report final project directive to management.

3.2.1 Economic Feasibility

Economic analysis is the most frequently used technique for evaluating the effectiveness of a proposed system. More commonly known as cost/benefit analysis, in this procedure we determine the benefits and savings that are expected from a proposed system and compare them with its costs. If the benefits outweigh the costs, we decide to design and implement the proposed system.

3.2.2 Technical Feasibility

This is concerned with specifying equipment and software that will successfully satisfy the user requirement. The technical needs of the system may vary considerably, but might include:

The facility to produce outputs in a given time.

Response time under certain conditions.

Ability to process a certain volume of transaction at a particular speed.

Facility to communicate data to distant location.

After examining technical feasibility, we give more importance to the configuration of the system than to the actual make of hardware. The configuration gives a complete picture of the system's requirements: ten to twelve workstations are required, interconnected through a LAN so that they can operate and communicate smoothly. They should have sufficient input and output speed to achieve a particular quality of printing.

3.2.3. Behavioral Feasibility

People are inherently resistant to change, and computers have been known to facilitate change. An estimate should be made of how strong a reaction the user staff is likely to have toward the development of a computerized system. It is common knowledge that computer installations have something to do with turnover, transfers, retraining, and changes in employee job status. Therefore, it is understandable that the introduction of a candidate system requires special effort to educate, sell, and train the staff on new ways of conducting business. In our safe deposit example, three employees are more than 50 years old and have been with the bank over 14 years, four of which have been in safe deposit. The remaining two employees are in their early thirties. They joined safe deposit about two years before the study. Based on data gathered from extensive interviews, the younger employees want the programmable aspects of safe deposit (essentially billing) put on a computer. Two of the three older employees have voiced resistance to the idea. Their view is that billing is no problem; the main emphasis is customer service, with personal contacts with customers. The decision in this case was to go ahead and pursue the project.

3.2.4 Time Feasibility

Time feasibility is a determination of whether a proposed project can be implemented fully within a stipulated time frame. If a project takes too much time it is likely to be rejected.

3.3 System Planning

The purpose of project planning is to identify the scope of the project, estimate the work involved, and create a project schedule. Project planning begins with requirements that define the software to be developed. The project plan reflects the current status of all project activities and is used to monitor and control the project. The project planning tasks ensure that the various elements of the project are coordinated and therefore guide the project execution. Project planning is crucial to the success of the project. Careful planning right from the beginning can help to avoid costly mistakes and provides assurance that the project will accomplish its goals on schedule and within budget.

3.3.1 PERT Chart

PERT ESTIMATION TECHNIQUE: PERT was developed to take account of the uncertainty surrounding estimates of task duration. It was developed in an environment of high-risk and expensive projects. The PERT estimates for this project are given below (activity durations are in weeks).

A PERT event: a point that marks the start or completion of one or more tasks. It consumes no time and uses no resources. It marks the completion of one or more tasks and is not "reached" until all of the activities leading to that event have been completed.

A predecessor event: an event (or events) that immediately precedes some other event without any other events intervening. It may be the consequence of more than one activity.

A successor event: an event (or events) that immediately follows some other event without any other events intervening. It may be the consequence of more than one activity.

A PERT activity: the actual performance of a task. It consumes time, it requires resources (such as labor, materials, space, machinery), and it can be understood as representing the time, effort, and resources required to move from one event to another. A PERT activity cannot be completed until the event preceding it has occurred.

Optimistic time (A): the minimum possible time required to accomplish a task, assuming everything proceeds better than is normally expected.

Pessimistic time (B): the maximum possible time required to accomplish a task, assuming everything goes wrong (but excluding major catastrophes).

Most likely time (M): the best estimate of the time required to accomplish a task, assuming everything proceeds as normal.

Pert Activity Time Estimates

Table 3.1: PERT Time Estimates (PTE)

Activity             Optimistic    Most Likely    Pessimistic    Expected Duration (tm)
                     Time (A)      Time (M)       Time (B)       (A + 4M + B) / 6
---------------------------------------------------------------------------------------
Feasibility Study    2             3              3.5            2.92
User Requirements    1             2              1.5            1.75
Analysis             2             3              3              2.833
System Design        3             4              5              4
Program Design       3             4              5              4
Coding               4             5              6              5
Testing              1             1.5            2              1.5
Implementation       0.5           0.5            0.5            0.5
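The expected-duration column follows directly from the PERT formula tm = (A + 4M + B) / 6. As an illustration, the computation over the activities above can be sketched in Python:

```python
# PERT expected duration: tm = (A + 4M + B) / 6
# A = optimistic, M = most likely, B = pessimistic (all in weeks)
def expected_duration(a, m, b):
    return (a + 4 * m + b) / 6

# (A, M, B) triples taken from Table 3.1
activities = {
    "Feasibility Study": (2, 3, 3.5),
    "User Requirements": (1, 2, 1.5),
    "Analysis": (2, 3, 3),
    "System Design": (3, 4, 5),
    "Program Design": (3, 4, 5),
    "Coding": (4, 5, 6),
    "Testing": (1, 1.5, 2),
    "Implementation": (0.5, 0.5, 0.5),
}

for name, (a, m, b) in activities.items():
    print(f"{name}: {expected_duration(a, m, b):.2f} weeks")
```

Summing the expected durations of the sequential phases gives a total expected project duration of about 22.5 weeks.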

3.3.2 Critical Path

The critical path is the chain of tasks that determines the duration of the project. To identify it, most likely time estimates for individual tasks are combined using statistical models, and boundary times that define a time "window" for each task are established.

Boundary time calculation can vary according to the software project schedule. It helps in determining the critical path and provides the manager with a quantitative method for evaluating progress as tasks are completed. Some important boundary times that may be discerned from a PERT chart are:

The earliest time that a task can begin when all preceding tasks are completed in the shortest possible time.

The latest time for task initiation before the minimum project completion time is delayed.

The earliest finish: the sum of the earliest start time and the task duration.
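For a simple sequential chain of phases, which matches this project's waterfall flow, the earliest start and earliest finish times fall out of a single forward pass. The sketch below is illustrative; the task names and expected durations are the values from Table 3.1:

```python
# Forward pass over a sequential chain of tasks: earliest finish
# (EF) = earliest start (ES) + duration, and the next task's ES is
# the previous task's EF. Durations are expected values in weeks.
tasks = [
    ("Feasibility Study", 2.92),
    ("User Requirements", 1.75),
    ("Analysis", 2.83),
    ("System Design", 4.0),
    ("Program Design", 4.0),
    ("Coding", 5.0),
    ("Testing", 1.5),
    ("Implementation", 0.5),
]

def forward_pass(tasks):
    schedule = []
    earliest_start = 0.0
    for name, duration in tasks:
        earliest_finish = earliest_start + duration
        schedule.append((name, earliest_start, earliest_finish))
        earliest_start = earliest_finish  # the next task begins here
    return schedule

for name, es, ef in forward_pass(tasks):
    print(f"{name}: ES = {es:.2f}, EF = {ef:.2f} weeks")
```

In a chain with no parallel branches every task is on the critical path, so the final earliest finish (about 22.5 weeks here) is the minimum project completion time.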

CHAPTER 4

SYSTEM DESIGN

4.1 Modules and their Description

This thesis work contains the following four modules:

Grid Reading

Grid Writing

Encryption

Decryption

4.1.1 Grid Reading

Grid Reading is the generation of a grid of the ASCII values of the plaintext, with a grid size of 32×32. We take the plaintext, get the 8-bit ASCII value of each character, and fill the 32×32 grid with these values. The number of grids generated depends on the size of the plaintext. The grid size is fixed for a file for each session. If the data from the file does not completely fill the grid, the grid is padded by adding 0s at the end. Grid reading is shown in Table 4.1 below:

Table 4.1: Grid Reading
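A minimal sketch of Grid Reading in Python, assuming the plaintext fits in a single grid (the thesis uses a 32×32 grid; a 4×4 grid is used here so the output stays readable):

```python
# Grid Reading: fill a fixed-size grid row by row with the ASCII
# values of the plaintext, padding with 0s when the text runs out.
# The thesis uses a 32x32 grid; SIZE is kept small here for display.
SIZE = 4

def grid_read(plaintext, size=SIZE):
    codes = [ord(c) for c in plaintext]
    codes += [0] * (size * size - len(codes))  # pad the grid with 0s
    # Split the flat list of codes into `size` rows of `size` values
    return [codes[i * size:(i + 1) * size] for i in range(size)]

grid = grid_read("HELLO WORLD")
for row in grid:
    print(row)
```

For plaintext longer than one grid, further grids would be filled the same way until the data is exhausted.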

4.1.2 Grid Writing

In Grid Writing, the data is read diagonally from the grid produced by Grid Reading and is transposed by writing it into an equal-sized grid from left to right, row by row. This technique has been implemented on a 32×32 grid. Grid writing is shown in Table 4.2 below.

Table 4.2: Grid Writing
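Grid Writing can be sketched the same way. The exact diagonal traversal is not pinned down in the text, so the sketch below assumes anti-diagonals starting from the top-left corner; a 3×3 grid is used for readability:

```python
# Grid Writing (transposition step): read the grid produced by Grid
# Reading along its diagonals, then write the values back into an
# equal-sized grid left to right, row by row. The diagonal order is
# an assumption here (anti-diagonals from the top-left).
def grid_write(grid):
    size = len(grid)
    diagonal_order = []
    for d in range(2 * size - 1):      # each anti-diagonal satisfies r + c == d
        for r in range(size):
            c = d - r
            if 0 <= c < size:
                diagonal_order.append(grid[r][c])
    # Re-write the diagonally read values row by row
    return [diagonal_order[i * size:(i + 1) * size] for i in range(size)]

original = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(grid_write(original))  # [[1, 2, 4], [3, 5, 7], [6, 8, 9]]
```

Because the traversal is a fixed permutation of cell positions, the step is reversible: decryption simply applies the inverse permutation.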

4.1.3 Encryption

Data encryption [6],[9] is the conversion of data into a form, called ciphertext, that cannot be easily understood by unauthorized people. Decryption is the process of converting encrypted data back into its original form so it can be easily understood. Encryption is done using a public-key algorithm, DSA, with the private key of the sender, converting the data into an unreadable form called ciphertext.

4.1.4 Decryption

Decryption [18],[27] is the process of converting encrypted data back into its original form, called plaintext, so that it is easily understood. Decryption is done using the same algorithm as encryption, with the public key of the sender, which provides authentication, confidentiality and non-repudiation. Decryption is the reverse process of encryption.
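Strictly speaking, DSA is a digital signature algorithm: rather than a reversible cipher, it provides the authentication, integrity and non-repudiation described above through a sign/verify pair, with the sender's private key used to sign and the sender's public key used to verify. A toy sketch of that sign/verify algebra, using tiny textbook-sized parameters that must never be used in practice, might look like:

```python
import hashlib
import random

# Toy DSA sketch. Parameters p = 7879 and q = 101 are tiny
# illustrative values (q is a prime divisor of p - 1); real DSA
# uses e.g. 2048-bit p and 256-bit q.
p, q = 7879, 101

# Derive a generator g of the order-q subgroup of Z_p*
for h in range(2, p):
    g = pow(h, (p - 1) // q, p)
    if g > 1:
        break

def hash_mod_q(message):
    """SHA-256 digest of the message, reduced modulo q."""
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % q

def sign(message, x):
    """Sign `message` with private key x; returns the pair (r, s)."""
    while True:
        k = random.randrange(1, q)        # fresh per-message nonce
        r = pow(g, k, p) % q
        if r == 0:
            continue
        s = pow(k, -1, q) * (hash_mod_q(message) + x * r) % q
        if s != 0:
            return r, s

def verify(message, r, s, y):
    """Check signature (r, s) against public key y = g^x mod p."""
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1 = hash_mod_q(message) * w % q
    u2 = r * w % q
    v = pow(g, u1, p) * pow(y, u2, p) % p % q
    return v == r

x = random.randrange(1, q)                # sender's private key
y = pow(g, x, p)                          # sender's public key
r, s = sign(b"secret file contents", x)
print(verify(b"secret file contents", r, s, y))  # True
```

For realistic parameter sizes, any tampering with the message or signature makes the verification equation fail, which is what gives the integrity and non-repudiation guarantees.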

4.2 Design

Software design is a process of problem solving and planning for a software solution. After the purpose and specifications of the software are determined, software developers design, or employ designers to develop, a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view. Software design can be considered as putting a solution to the problem at hand using the available capabilities. The main difference between software analysis and design is that the output of analyzing a software problem is a set of smaller problems to solve, and this output should not deviate much even when the analysis is conducted by different team members or entirely different groups. Design, however, depends on the available capabilities, so there can be different designs for the same problem depending on the environment that will host the solution (an operating system, the web, mobile, or the cloud computing paradigm) and on the development approach (building a solution from scratch, using reliable frameworks, or applying suitable design patterns).

4.3 UML Diagram

Unified Modeling Language (UML) is a standardized general-purpose modeling language in the field of software engineering. UML includes a set of graphical notation techniques to create abstract models of specific systems. The Unified Modeling Language (UML) is an open method used to specify, visualize, construct and document the artifacts of an object-oriented software-intensive system under development.

UML offers a standard way to write a system's blueprints, including conceptual components such as:

Actors

Business processes

System's components and activities as well as concrete things such as:

Programming language statements

Database schemas and

Reusable software components.

4.3.1 Activity Diagram

Activity diagrams are a loosely defined diagram technique for showing workflows of stepwise activities and actions, with support for choice, iteration and concurrency. In the Unified Modeling Language, activity diagrams can be used to describe the business and operational step-by-step workflows of components in a system. An activity diagram shows the overall flow of control. They consist of:

Initial node.

Activity final node.

Activities

The starting point of the diagram is the initial node, and the activity final node is the ending.

[Figure: Start → Plaintext → Grid Reading → Grid Writing → Encryption → Decryption → Plaintext]

Figure 4.1: Activity diagram

4.4 Data Flow Diagram

Data flow diagram (DFD) is used to show how data flows through the system and the processes that transform the input data into output. Data flow diagrams are a way of expressing system requirements in a graphical manner. DFD represents one of the most ingenious tools used for structured analysis. It is also known as a bubble chart.

The DFD at the simplest level is referred to as the ‘CONTEXT ANALYSIS DIAGRAM’. These are expanded by level, each explaining its process in detail. Processes are numbered for easy identification and are normally labeled in block letters. Each data flow is labeled for easy understanding.

[Figure: Plaintext + Private Key → Encryption Algorithm → Ciphertext; Ciphertext + Public Key → Decryption Algorithm → Plaintext]

Figure 4.2: DFD of Encryption and Decryption module

Figure 4.3: DFD of DSA Algorithm


