High Capacity Data Hiding System Using Bpcs


02 Nov 2017


BPCS STEGANOGRAPHY

Chapter1.

Introduction

Information security has received more and more attention in recent years, and information hiding has become a hotspot in information security research. By embedding unnoticeable secrets into digital media signals such as images, audio and video, information hiding realizes the functions of copyright protection and secret communication. (In computer science the same term, information hiding, also refers to segregating the design decisions of a program that are most likely to change behind a stable interface, so that the rest of the program is protected from extensive modification; that software-engineering usage is distinct from the data-hiding sense used in this report.)

There are two major branches of information hiding, Steganography and Watermarking.

Digital watermarking is the process of embedding information into a digital signal in a way that is difficult to remove. The term is derived from a process used since 1282 in the production of paper bearing a watermark for visible identification. In digital watermarking, the signal may be audio, pictures, or video. If the signal is copied, then the information also is carried in the copy. A signal may carry several different watermarks at the same time. There are visible and invisible types of watermarking.

This work builds on steganography and combines it with additional techniques to make the method more robust against attacks. The introduction discusses cryptography and steganography. Chapter 2 surveys existing steganography work, with an analysis aimed at identifying a better steganography technique. Chapter 3 proposes a method with high embedding capacity and a high level of security. Chapter 4 concludes.

1.1 Introduction to Steganography

1.1.1 What is Steganography?

The word steganography is of Greek origin and means "concealed writing", from the Greek words steganos meaning "covered or protected" and graphein meaning "writing". Steganography is the art and science of writing hidden messages in such a way that no one apart from the intended recipient knows of the existence of the message. The goal of steganography is to avoid drawing suspicion to the transmission of a secret message.

In modern terms, steganography is usually implemented computationally, where cover works such as text files, images, audio files, and video files are tweaked in such a way that a secret message can be embedded within them.

1.1.2 How is Steganography Used?

In terms of development, steganography comprises two algorithms, one for embedding and one for extracting. The embedding process is concerned with hiding a secret message within a cover work, and is the most carefully constructed process: a great deal of attention is paid to ensuring that the secret message goes unnoticed if a third party intercepts the cover work. The extracting process is traditionally much simpler, as it is just the inverse of the embedding process, revealing the secret message at the end. The entire process of steganography for images is presented graphically in figure 1.1.

[Block diagram: a secret message and a cover work enter the stegosystem encoder, which uses a key to produce a stegogramme; the stegogramme travels over a communication channel to the stegosystem decoder, which uses the same key to output an estimated secret message.]

Fig 1.1: The Process of Steganography

Figure 1.1 shows one example of how steganography might be used in practice. Two inputs are required for the embedding process:

1. Secret message - usually a text file that contains the message you want to transfer

2. Cover work (image) - used to construct a stegogramme that contains a secret message

The next step is to pass the inputs through the stego-system encoder, which is carefully engineered to embed the message within an exact copy of the cover work with minimum distortion; the lower the distortion, the better the chances of undetectability.

The stego-system encoder will usually require a key, and this key is also used at the extraction phase. This is a security measure designed to protect the secret message. Without a key, it would be possible for someone to correctly extract the message if they managed to get hold of the embedding or extracting algorithms. By using a key, however, the operation of the stego-system encoder can be randomized, and the same key must be used when extracting the message so that the stego-system decoder knows which process to use. This means that if the algorithm falls into enemy hands, it is extremely unlikely that they will be able to extract the message successfully.

The resulting output of the stego-system encoder is the stegogramme, which is designed to be as close to the cover work as possible while containing the secret message. The stegogramme is then sent over some communication channel along with the key that was used to embed the message. Both the stegogramme and the key are fed into the stego-system decoder, where an estimate of the secret message is extracted. Note that the output of the extraction process can only ever be called an estimate: the stegogramme may be subjected to noise on the communication channel that changes some of its values, so we can never be sure that the extracted message is an exact representation of the original. Moreover, the recipient never knows what the original message was, and so has nothing to compare the extracted message against.

1.1.3 Data Hiding Techniques

There are three different approaches that can be used to hide information in a cover object. One of them, substitution, is discussed here.

Substitution Algorithms: There is increased interest in using digital images as cover objects for steganography, because of the proliferation of digital images over the Internet and the high degree of redundancy present in a digital representation of an image (despite compression). A number of image steganography algorithms based on the substitution approach have been proposed. They can be categorized into two types: spatial domain techniques and transform domain techniques. In the spatial domain approach, the cover image pixels are directly used to inscribe bits of the secret data, whereas in the frequency domain, the cover image first undergoes a transformation into its frequency domain and then its transformed coefficients are altered to embed the secret information.

i) Spatial Domain Algorithm: Spatial domain algorithms embed data by substituting carefully chosen bits from the cover image pixels with secret message bits. LSB-based techniques are the most widely known steganography algorithms; they work by replacing the least significant bits of image pixels. These modifications can be interpreted as random noise, which should not have any perceptible effect on the image. This is usually effective in cases where the LSB substitution does not cause significant quality degradation, such as in 24-bit bitmaps. Some algorithms change the LSBs of pixels visited in a random walk, others modify pixels in certain areas of the image, or simply increment or decrement the pixel value. Our proposed technique is based on the LSB technique.

For example, to hide the letter "a" (ASCII code 97 that is 01100001) inside eight bytes of a cover, we set the LSB of each byte like this:

10010010 01010011 10011011 11010010

10001010 00000010 01110010 00101011

The application decoding the cover reads the eight Least Significant Bits of those bytes to re-create the hidden byte—that is 01100001—the letter "a."
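The substitution above can be sketched in a few lines of Python (the helper names are ours, for illustration only):

```python
def embed_byte(cover, value):
    """Replace the LSB of each of 8 cover bytes with one bit of value (MSB first)."""
    bits = [(value >> (7 - i)) & 1 for i in range(8)]
    return [(c & 0xFE) | b for c, b in zip(cover, bits)]

def extract_byte(stego):
    """Re-create the hidden byte from the LSBs of 8 stego bytes."""
    value = 0
    for s in stego:
        value = (value << 1) | (s & 1)
    return value

# The eight stego bytes from the example above: their LSBs spell 01100001.
stego = [0b10010010, 0b01010011, 0b10011011, 0b11010010,
         0b10001010, 0b00000010, 0b01110010, 0b00101011]
assert extract_byte(stego) == ord("a")     # ASCII 97
```

Note that only the lowest bit of each cover byte changes, which is why the visual impact on a 24-bit image is negligible.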

ii) Transform Domain Algorithm: Transform domain techniques hide data in the transform coefficients used by compression algorithms. The Discrete Cosine Transform (DCT) is one of the commonly used transform domain techniques; it expresses a waveform as a weighted sum of cosines. The data is hidden in the image file by altering the DCT coefficients of the image: specifically, DCT coefficients which fall below a chosen threshold are replaced with the secret bits. Taking the inverse transform then yields the stego image. The extraction process consists of retrieving those specific DCT coefficients.

1.2 Introduction to Cryptography

1.2.1 What is Cryptography?

Cryptography, a word with Greek origins, means "secret writing". However, we use the term to refer to the science and art of transforming messages to make them secure and immune to attacks. The main goals of modern cryptography are user authentication, data authentication (data integrity and data origin authentication), non-repudiation of origin, and data confidentiality, with related security measures to take care of the threats in information hiding. Unlike steganography, cryptography does not conceal the existence of a message: anyone observing the channel can guess that something is encrypted and hence that a secret message is present.

1.2.2 How is Cryptography Used?

Cryptographic methods depend on either public key cryptography or a symmetric key cryptosystem. Figure 1.2 shows this procedure.

[Block diagram: the plaintext "Hello world!" is encrypted with a key into ciphertext such as "#%guiyrwkmn,:?"; the key is exchanged over a shared secret channel, and decryption with the same key recovers the plaintext "Hello world!".]

Figure 1.2: The Process of Cryptography

1.2.3 Types of cryptography

Secret Key Cryptography/Symmetric key cryptography

With secret key cryptography, a single key is used for both encryption and decryption. The sender uses the key to encrypt the plaintext and sends the cipher text to the receiver. The receiver applies the same key to decrypt the message and recover the plaintext. Because a single key is used for both functions, secret key cryptography is also called symmetric encryption.

With this form of cryptography, it is obvious that the key must be known to both the sender and the receiver; that is the secret. The biggest difficulty with this approach, of course, is the distribution of the key.

Some secret key cryptography algorithms in use today include the Data Encryption Standard (DES), the Advanced Encryption Standard (AES) and the International Data Encryption Algorithm (IDEA); these are applied in modes of operation such as Electronic Codebook (ECB), Cipher Block Chaining (CBC) and Cipher Feedback (CFB).

Public-Key Cryptography/Asymmetric cryptography

Public-key cryptography has been said to be the most significant new development in cryptography. PKC depends upon the existence of so-called one-way functions: mathematical functions that are easy to compute whereas their inverses are relatively difficult to compute. Generic PKC employs two keys that are mathematically related, although knowledge of one key does not allow someone to easily determine the other. One key is used to encrypt the plaintext and the other key is used to decrypt the ciphertext. It does not matter which key is applied first, but both keys are required for the process to work (Figure 1 (B)). Because a pair of keys is required, this approach is also called asymmetric cryptography.

Some public-key cryptography algorithms that are in use today for key exchange or digital signatures include RSA (Ronald Rivest, Adi Shamir, and Leonard Adleman), Diffie-Hellman, Digital Signature Algorithm (DSA), ElGamal, Elliptic Curve Cryptography (ECC), Public-Key Cryptography Standards (PKCS), etc.

Hash function:

Hash functions, also called message digests or one-way encryption, are a cryptographic type that, in some sense, uses no key. Instead, a fixed-length hash value is computed from the plaintext in such a way that neither the contents nor the length of the plaintext can be recovered. Hash algorithms are typically used to provide a digital fingerprint of a file's contents, often to ensure that the file has not been altered by an intruder or virus. Hash functions are also commonly employed by many operating systems to protect passwords. Hash functions thus provide a measure of the integrity of a file. Hash algorithms in common use today include the Message Digest (MD) algorithms MD2, MD4 and MD5, the Secure Hash Algorithm (SHA), etc.
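The fingerprint property can be illustrated with Python's standard hashlib module: a one-byte change to the input produces a completely different fixed-length digest.

```python
import hashlib

# A stored digest of a file's contents detects any later alteration.
original = b"Hello world!"
tampered = b"Hello world?"

d1 = hashlib.sha256(original).hexdigest()
d2 = hashlib.sha256(tampered).hexdigest()

assert d1 != d2          # a one-byte change yields a different digest
assert len(d1) == 64     # SHA-256 always outputs 256 bits (64 hex characters)
```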

1.3 Digital image processing

Digital image processing is the process of manipulating images by computer with the purpose of improving image quality or extracting useful information. A digital image consists of discrete picture elements called pixels. Associated with each pixel is a number, the DN (Digital Number), which depicts the average radiance of a relatively small area within a scene. The various operations can be classified as image enhancement, segmentation and restoration.

1.4 Image Data

The image data block of bytes describes the image pixel by pixel. In bitmap files, pixel data are stored upside down with respect to normal raster-scan order: starting in the lower left corner, going from left to right, and then row by row from the bottom to the top of the image. The message should be stored in this data in such a way that nobody can distinguish between image data and hidden message. A gray image uses 8 bits per pixel and a color image 24 bits. The image can be separated into bit levels, and information can easily be hidden in the LSBs (Least Significant Bits).

1.5 Compression method

Compression is useful because it helps reduce the consumption of expensive resources, such as hard disk space or transmission bandwidth. On the downside, compressed data must be decompressed to be used, and this extra processing may be detrimental to some applications. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (if using a lossy compression scheme), and the computational resources required to compress and uncompress the data.

Lossless compression algorithms usually exploit statistical redundancy in such a way as to represent the sender's data more concisely without error. Lossless compression is possible because most real-world data has statistical redundancy. For example, in English text, the letter 'e' is much more common than the letter 'z', and the probability that the letter 'q' will be followed by the letter 'z' is very small. Another kind of compression, called lossy data compression or perceptual coding, is possible if some loss of fidelity is acceptable. Generally, a lossy data compression will be guided by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to variations in color. JPEG image compression works in part by "rounding off" some of this less-important information. Lossy data compression provides a way to obtain the best fidelity for a given amount of compression.

The proposed system hides data that must not lose a single bit, so preference tilts toward lossless compression. Lossless compression techniques include variable-length coding (Huffman, binary code, truncated Huffman, B2-code, binary shift, Huffman shift and arithmetic coding) [17] and the LZW algorithm.
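The lossless round-trip property motivating this choice can be sketched with Python's standard zlib module (the sample text is arbitrary):

```python
import zlib

# Lossless compression exploits statistical redundancy: repetitive text
# shrinks well, and decompression restores every byte exactly.
text = b"the quick brown fox jumps over the lazy dog " * 100
packed = zlib.compress(text, level=9)

assert zlib.decompress(packed) == text      # bit-exact recovery
assert len(packed) < len(text)              # redundancy removed
```

This exactness is the reason a lossy codec such as JPEG cannot be applied to the compressed secret payload itself.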

Chapter2.

LITERATURE SURVEY

2.1 Least Significant Bit Replacement

In image steganography almost all data hiding techniques try to alter insignificant information in the cover image. Least significant bit (LSB) insertion is a common, simple approach to embedding information in a cover image. For instance, one simple proposed scheme is to place the embedded data in the least significant bit (LSB) of each pixel in the cover image. The altered image is called the stego-image [1].

To a computer, an image is simply a file that records the different colors and intensities of light in different areas of an image. For hiding information inside images, the Least Significant Bit (LSB) method is usually used. In the LSB method the 8th bit (the least significant bit) of every byte of the carrier file is substituted by one bit of the secret information. This works well for image carriers because changing the least significant bit from 0 to 1 or vice versa causes hardly any change in the appearance of the color of that pixel. The LSB method usually does not increase the file size, but depending on the size of the information to be hidden, the file can become noticeably distorted. In the injection method, by contrast, the secret information is simply injected wholly into an appropriate location of the carrier file; its main problem is that it can significantly increase the size of the carrier file.

"On the Limits of Steganography" [20] was proposed by Anderson and Petitcolas who present number of attacks on information hiding scheme and suggest improved embedding efficiency and public key steganography.

Babita Ahuja and Manpreet Kaur proposed a steganography technique that compares a high capacity filter with a low capacity filter to embed the data [1]. An (N, 1) Secret Sharing Approach Based on Steganography with Gray Digital Images was proposed by Jinsuk Baekl, Cheonshik Kim, Paul S. Fisherl, and Hongyang Cha [14]; it consists of an embedding procedure and an extraction procedure and basically uses an Exclusive-OR (XOR) operation and a binary-to-gray code conversion. Chin-Chen Chang and Hsien-Wen Tseng [15] suggested data hiding in images by a hybrid LSB substitution technique: optimal LSB substitution with OPAP. Another novel approach is to develop a secure image-based steganographic model using the Integer Wavelet Transform [16].

2.2 Visual cryptographic Steganography:

Visual steganography [2] is one of the most secure forms of steganography available today. It is most commonly implemented in image files (within the lowest bits of noisy images). An advanced system of encrypting data combines the features of cryptography and steganography; such a system is more secure than either technique alone, and also more secure than other combined cryptography-and-steganography systems.

In cryptographic system, there are following types:

1. THE JOINT KEY CRYPTOGRAPHY (Symmetric key cipher): It uses a common key for encryption and decryption of the message. This key is shared privately by the sender and the receiver. The sender encrypts the data using the joint key and then sends it to the receiver who decrypts the data using the same key to retrieve the original message. Joint key cipher algorithms are less complex and execute faster as compared to other forms of cryptography but have an additional need to securely share the key. In this type of cryptography the security of data is equal to the security of the key. In other words, it serves the purpose of hiding a smaller key instead of the huge chunk of message data.

2. THE PUBLIC KEY CRYPTOGRAPHY (asymmetric key cipher) is a technique that uses a different key for encryption as the one used for decryption. Public key systems require each user to have two keys – a public key and a private key (secret key). The sender of the data encrypts the message using the receiver’s public key. The receiver then decrypts this message using his private key. This technique eliminates the need to privately share a key as in case of symmetric key cipher. Asymmetric cryptography is comparatively slower but more secure than symmetric cryptography technique. The public key cryptography is a fundamental and most widely used technique, and is the approach which underlies Internet standards such as Transport Layer Security (TLS) (successor to SSL). The most common algorithm used for secret key systems is the Data Encryption Algorithm (DEA) defined by the Data Encryption Standard (DES) and Advanced Encryption Standard.

3. A HYBRID CRYPTOSYSTEM is a more complex cryptography system that combines the features of both joint and public key cryptography techniques. We shall use traditional public key cryptography techniques to convert the message into a cipher. For embedding the cipher into images, a modified joint key technique will be used [17]. This technique will be discussed in brief next.

2.3 A Hybrid Approach to Steganography Embedding

This proposed approach [11] encrypts the secret message using a new cipher extended from the Hill cipher. The ciphertext of the secret message is then embedded into the carrier image in the 6th, 7th and 8th bit locations of the darkest and brightest pixels; here the 8th bit means the least significant bit of a byte (the LSB). The brightest pixels are those with a gray value in the range 224 to 255 on an 8-bit gray scale, and the darkest pixels those with a gray value in the range 0 to 31. As these darkest and brightest pixels are spread randomly across the image, an intruder will not be able to identify them. After embedding, the resultant image is sent to the receiver, who applies the reverse of the sender's operations to recover the secret information.

Basic idea of the proposed method is given in the following algorithm:

Step 1 - Convert the color image to binary.

Step 2 - Apply the new Hill cipher (block hashed by a 128-bit key) to encrypt the secret information, producing the ciphertext.

Step 3 - Convert the ciphertext to binary.

Step 4 - Make sure that the carrier image is large enough to conceal the ciphertext.

Step 5 - Embed the cipher into the image as per the embedding technique.

Step 6 - Send the resultant image to the receiver.

Step 7 - The receiver applies the reverse of the sender's process and recovers the hidden information.
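The pixel-selection and bit-replacement rules of this scheme can be sketched as follows (a simplified illustration with helper names of our own; the Hill-cipher encryption step is omitted):

```python
def eligible(pixel):
    """Darkest (0-31) or brightest (224-255) pixels of an 8-bit grayscale image."""
    return pixel <= 31 or pixel >= 224

def embed_three_bits(pixel, bits):
    """Overwrite bit locations 6, 7 and 8 (the three LSBs) with 3 cipher bits."""
    return (pixel & 0b11111000) | (bits & 0b111)

# A sample pixel row: only the darkest/brightest pixels are touched.
row = [12, 130, 250, 60, 3, 240]
targets = [i for i, p in enumerate(row) if eligible(p)]
assert targets == [0, 2, 4, 5]
assert embed_three_bits(12, 0b101) == 13    # 00001100 -> 00001101
```

Note that clearing the three LSBs before inserting cipher bits keeps a dark pixel in the 0-31 range and a bright pixel in the 224-255 range, so embedding does not disturb the selection rule at extraction time.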

2.4 Pixel-value differencing steganography method

This method [4] was proposed by Han-ling ZHANG, Guang-zhi GENG and Cai-qiong Xiong. It aims to increase the capacity of the hidden secret information while keeping the stego-image imperceptible to human vision. This novel steganographic approach based on pixel-value differencing uses the largest difference value among the three pixels close to the target pixel to estimate how many secret bits can be embedded into that pixel.

The largest pixel-value difference among the three pixels close to the target pixel is used to determine the amount of data to embed in the target pixel. If the pixel is in an edge area, more bits can be placed in it than in a smooth area. In order to enhance the image quality of the stego-image, the optimal pixel adjustment process (OPAP) proposed by Chan et al. is applied to minimize the embedding error.

This method refers to the three neighboring pixels that have already finished the insertion process when embedding the secret message into the target pixel (Fig 2.1).

PLU   PU
PL    PX

Figure 2.1 A target pixel and three neighboring pixels

In the cover image, let the target pixel PX have gray value gx, and let gl, glu and gu be the gray values of its left pixel PL, upper-left pixel PLU and upper pixel PU, respectively. The secret data bit stream is embedded into the host image in raster-scan order, while the pixels located in the first row and first column are skipped.

The gray value difference d is defined as

d = gmax - gmin (1)

where gmax = max(gl, glu, gu) and gmin = min(gl, glu, gu).

Using Eqn (1), we judge whether the target pixel lies in an edge area or a smooth area. The embedding capacity of the pixel depends on the value of d. Let n be the number of bits which can be embedded in the target pixel PX; n is calculated by:

n = 1, if 0 ≤ d ≤ 1
n = ⌊log2 d⌋, if d > 1 (2)

Because the image quality of the stego-image degrades drastically when n > 4, we set n = 4 whenever Eqn (2) gives n > 4.

A sub-stream of n bits from the embedding data is extracted and converted to an integer b. The new gray value g'x is then computed as

g'x = gx - (gx mod 2^n) + b
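Equations (1), (2) and the substitution formula above can be sketched as follows (the helper names are ours; the cap at n = 4 is applied as described):

```python
from math import floor, log2

def capacity(gl, glu, gu):
    """Bits n embeddable in the target pixel from its three neighbours
    (Eqns 1-2), capped at 4 to limit visible degradation."""
    d = max(gl, glu, gu) - min(gl, glu, gu)
    n = 1 if d <= 1 else floor(log2(d))
    return min(n, 4)

def embed(gx, n, b):
    """New gray value g'x = gx - (gx mod 2^n) + b for an n-bit integer b."""
    return gx - (gx % (2 ** n)) + b

# Edge-like neighbourhood: large difference d allows more bits per pixel.
n = capacity(40, 200, 90)             # d = 160, floor(log2 160) = 7, capped to 4
assert n == 4
assert embed(100, n, 0b1011) == 107   # 100 - (100 mod 16) + 11
```

Extraction simply recomputes n from the same (unchanged) neighbours and reads b = g'x mod 2^n, which is why the first row and column must be left untouched.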

2.5 BPCS Steganography

The BPCS (Bit Plane Complexity Segmentation) technique is used to embed data into bitmap files. The ultimate goal is to embed as much data as possible into a cover image without detection by human perception or statistical analysis. In this section we introduce the concept of BPCS, which is the basic idea behind our proposed steganographic algorithm.

Bit Plane Decomposition: Bit plane segmentation decomposes an image into a set of binary images, one for each bit position of the pixel values. For an 8-bit image (8 bits per pixel), 8 bit planes can be decomposed. Figure 2.3 illustrates bit plane decomposition of a sample image. The least significant bit plane is a binary image consisting of the least significant bit of each pixel in the image; the next higher bit plane consists of the next higher bit of each pixel, and so on. Such decomposition provides an efficient way to hide information in digital images. After the image is decomposed into its bit planes, the bits of one or more of the lowest bit planes are replaced with secret information, and the bit planes are then recomposed into an image. This decomposition process is the central idea of BPCS, discussed next.

Figure 2.3 Bit plane decomposition

Complexity: To implement BPCS we need a binary complexity measure that can be applied to each region of each bit plane. There is no standard definition of image complexity. Most implementations of BPCS adopt the black-and-white border length to measure image complexity: if the border is long, the image is complex, and if it is short, the image is simple. The total length of the black-and-white border is equal to the sum of the number of black-and-white changes along the rows and along the columns of an image. Here we propose a measure of complexity based on the variance of each region of each bit plane. The embedding capacity without obvious image degradation can be improved if we limit embedding in the higher planes to regions of the image that are fairly "complex", i.e. where there is already natural variation in pixel values in the local region; noise added to such regions through the embedding process is much less perceptible than noise added to flat regions. Using such a complexity measure, we can determine whether each individual region of each bit plane is complex enough for embedding. Unlike similar existing algorithms, our method uses the variance as the complexity measure, which improves the hiding capacity and the Peak Signal to Noise Ratio (PSNR) of the system.
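A minimal sketch of bit-plane extraction and a variance-based complexity test follows; the concrete threshold value is an assumption for illustration (the text leaves it as a function of the region mean):

```python
from statistics import pvariance

def bit_plane(pixels, k):
    """Extract bit plane k (0 = LSB) from a flat list of 8-bit pixel values."""
    return [(p >> k) & 1 for p in pixels]

def embeddable(region_bits, threshold=0.2):
    """Variance-based complexity test (threshold assumed here): a flat binary
    region has variance 0; a half-and-half region reaches the maximum 0.25."""
    return pvariance(region_bits) >= threshold

flat = [255] * 64            # uniform 8x8 block: every plane is constant
noisy = [0, 255] * 32        # alternating block: every plane alternates
assert not embeddable(bit_plane(flat, 0))
assert embeddable(bit_plane(noisy, 0))
```

Because a bit plane holds only 0s and 1s, its variance is p(1-p) for the fraction p of 1-bits, so the test is equivalently a check that the plane is not dominated by either bit value.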

2.6 Peak Signal to Noise Ratio (PSNR)

For the embedding efficiency factor, we consider the peak-signal-to-noise ratio (PSNR), a widely used objective measurement that avoids subjective evaluation of the degree of similarity between an original image and a stego image. The PSNR is defined as:

PSNR = 10 · log10(I²max / MSE) dB = 20 · log10(Imax / √MSE) dB (3)

Where, Imax is equal to 255 for grayscale images, and the mean squared error MSE [13] is defined to be:

MSE = (1/MN) · Σ(i=1..M) Σ(j=1..N) (CI(i,j) - SI(i,j))² (4)

Where, M and N represent the number of horizontal and vertical pixels respectively of the cover(C) and stego(S) image.

As the mean squared error is the denominator in (3), the smaller it is, the larger the PSNR will be, reflecting the fact that the stego image is similar to the original image. It is generally accepted that distortion in the stego image is hard for the human eye to detect as long as the PSNR value is larger than 29 dB. For the hiding capacity factor, we consider a bits-per-pixel (bpp) measurement to evaluate how many secret bits a stego image can carry.
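Equations (3) and (4) can be computed directly; here is a sketch for flat (one-dimensional) pixel lists:

```python
from math import log10

def psnr(cover, stego, i_max=255):
    """PSNR per Eqns (3)-(4) for equal-length flat lists of 8-bit pixels."""
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * log10(i_max ** 2 / mse)

cover = [100, 120, 140, 160]
stego = [101, 120, 141, 160]        # two pixels off by one -> MSE = 0.5
assert round(psnr(cover, stego), 1) == 51.1   # well above the 29 dB threshold
```

An identical stego image gives MSE = 0 and hence an infinite PSNR, which is why PSNR is reported only for modified images.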

2.7 Analysis to Steganography Methods

Table 1: Comparison of proposed steganography methods

Characteristic       | Hybrid approach | Pixel difference | BPCS
---------------------|-----------------|------------------|-----------------
Program complexity   | Low             | Moderate         | High
Embedding capacity   | Low (1 bit)     | High (<4 bits)   | Highest (6 bits)
PSNR                 | Highest         | Moderate         | Moderate

Embedding policies:

Hybrid approach - embed the information in the last 3 bits of the darkest/brightest pixels.

Pixel difference - compute the difference value among the three pixels close to the target pixel to estimate how many secret bits to embed in it.

BPCS - check whether an 8x8 bit-plane block contains noisy information and embed the secret data there.

As per the analysis of the studied algorithms, the BPCS steganography method has the best embedding capacity, though only a moderate PSNR, so we pursue a modification of BPCS. The important factors, embedding efficiency and hiding capacity, are considered in order to evaluate the performance of a data hiding scheme.

As our comparison of the steganography methods shows, relative to the pixel-difference method, BPCS has higher embedding capacity but comparatively lower PSNR. In our proposed method we therefore choose BPCS steganography and modify it so as to increase the embedding capacity as well as improve the PSNR, as discussed later.

Chapter3.

Proposed Method

3.1 Proposal

The proposed system model is shown in Figure 3.1. The host image (cover image) is enhanced by histogram equalization, while the hidden source (secret information) is encoded using hybrid cryptography, discussed in section 3.2, and then compressed using the method explained in section 3.3. The secret information is then embedded into the contrast-enhanced cover image using the bit-plane variance criterion of section 3.5. In the hidden information encoder, each symbol of the message is mapped to an 8x8 binary block before it is added to the image. One goal of the steganographic scheme is to make it difficult to guess the exact mapping between the secret information and the cover image.

[Flowchart: on the sending side, the Message passes through the Encryption Algorithm (using the stego key, hybrid cryptography) to give Encrypted data, then through the Compression method to give Compressed data, which the Steganography method embeds into the Host Image. On the receiving side, the Decompression method yields Decompressed data, the Decryption Algorithm yields Decrypted data, and the Message is recovered.]

Figure 3.1: Flowchart of proposed methodology

Because this algorithm is based on the image complexity of each bit plane, the secret key must include data about the exact mapping of the secret information over the image. To determine whether the complexity of a given region is sufficient for embedding, we use as reference a threshold value which is a function of the mean value of that region: if the complexity of the region is greater than or equal to this value we deem it embeddable, and if not, we leave the region alone. Based on this idea, we can build, for each bit plane, a map of regions complex enough to embed information, so that more information can be embedded in the cover image and no one can easily mount an attack to obtain it.

At the receiver site, the received stego-image is first decomposed into bit planes. Again, we compute the complexity of each region on each bit plane and obtain a map of noisy regions; of course, only the modified blocks of the image will satisfy the threshold used in the complexity computation at the receiver. The obtained maps contain the specific regions of the stego-image where the secret message was inserted. Once the blocks have been extracted, the decomposer and decoder perform the inverse process to recover the message in its original format.

3.2 Hybrid Cryptography

Generally we use symmetric, asymmetric or hash cryptography. To avoid the drawbacks of these methods, we use here hybrid cryptography, a combination of symmetric and asymmetric cryptography.

DES: The Data Encryption Standard (DES) is the name of the Federal Information Processing Standard (FIPS) 46-3, which describes the data encryption algorithm (DEA). The DEA is also defined in the ANSI standard X3.92. DEA is an improvement of the algorithm Lucifer developed by IBM in the early 1970s. IBM, the National Security Agency (NSA) and the National Bureau of Standards (NBS now National Institute of Standards and Technology NIST) developed the algorithm. The DES has been extensively studied since its publication and is the most widely used symmetric algorithm in the world.

The DES has a 64-bit block size and uses a 56-bit key during execution (8 parity bits are stripped off from the full 64-bit key). DES is a symmetric cryptosystem, specifically a 16-round Feistel cipher. When used for communication, both sender and receiver must know the same secret key, which can be used to encrypt and decrypt the message, or to generate and verify a Message Authentication Code (MAC). The DES can also be used for single-user encryption, such as to store files on a hard disk in encrypted form. The brief DES algorithm is shown in figure 3.2.

[Figure 3.2 (diagram): the 64-bit plaintext passes through an initial permutation and is split into 32-bit halves L0 and R0; sixteen Feistel rounds follow, each applying the round function with a 48-bit round key derived from the 56-bit cipher key (64 bits with parity) by left shifts of two 28-bit halves and a compression P-box; after a final swap of L16 and R16 and the final permutation, the 64-bit ciphertext is produced. Decryption runs the same structure with the round keys applied in reverse order.]

Figure 3.2 Data Encryption Standard Algorithm
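The parity handling mentioned above can be sketched at the bit level. This is a minimal illustration, not part of any DES library; the function names are ours:

```python
def strip_parity(key64):
    """Drop every 8th bit (the parity bit) of a 64-bit DES key,
    leaving the 56 bits actually used by the key schedule."""
    assert len(key64) == 64
    return [bit for i, bit in enumerate(key64) if (i + 1) % 8 != 0]

def has_odd_parity(key64):
    """Every 8-bit byte of a well-formed DES key has an odd number of 1s."""
    return all(sum(key64[i:i + 8]) % 2 == 1 for i in range(0, 64, 8))
```

Stripping the eight parity bits is why DES has a 64-bit key format but only 56 bits of effective key material.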

RSA: The RSA algorithm, named for its creators Ron Rivest, Adi Shamir, and Leonard Adleman, is currently one of the favorite public-key encryption methods. RSA uses two exponents, e and d, where e is public and d is private. Suppose P is the plaintext and C is the ciphertext. Alice uses C = P^e mod n to create ciphertext C from plaintext P; Bob uses P = C^d mod n to retrieve the plaintext sent by Alice. The modulus n, a very large number, is created during the key generation process. Encryption and decryption both use modular exponentiation, which is feasible in polynomial time using the fast exponentiation algorithm. However, recovering the private exponent from the public key is as hard as factoring the modulus, for which no polynomial-time algorithm is yet known. This means that Alice can encrypt in polynomial time (e is public) and Bob can decrypt in polynomial time (because he knows d), but Eve cannot decrypt, because she would have to calculate the eth root of C using modular arithmetic. The RSA algorithm is explained in Figure 3.3.

Here is the algorithm:

1. Choose two large prime numbers p and q

2. n = pq

3. Ø(n) = (p − 1)(q − 1)

4. Select any number e such that gcd(Ø(n), e) = 1 and 1 < e < Ø(n)

5. Calculate d such that d = e^(−1) mod Ø(n)

6. Encryption key (public key) is {e, n}

7. Decryption key (private key) is {d, n}

8. Encryption: CT = PT^e mod n

9. Decryption: PT = CT^d mod n

[Figure 3.3 (diagram): the receiver selects primes p and q, computes n = p·q, performs the key calculation in G = <Z_Ø(n), ×, +> to select e and d, announces the public key (e, n), and keeps the private key d. The sender encrypts with C = P^e mod n and transmits C; the receiver decrypts with P = C^d mod n to recover P.]

Figure 3.3 RSA Algorithm

RSA is one of the best-known public-key cryptography algorithms; it uses prime numbers to encrypt and decrypt data with a public key and a private key, and it solves the key-sharing and key-exchange problem.
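The nine steps above can be sketched directly in code. This is textbook RSA with toy-sized primes for illustration only; real RSA uses primes of hundreds of digits and padded messages:

```python
def rsa_keygen(p, q, e):
    """Textbook RSA key generation from two primes (toy sizes only)."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)        # modular inverse of e, Python 3.8+
    return (e, n), (d, n)      # public key, private key

def rsa_encrypt(plaintext, public):
    e, n = public
    return pow(plaintext, e, n)   # C = P^e mod n

def rsa_decrypt(ciphertext, private):
    d, n = private
    return pow(ciphertext, d, n)  # P = C^d mod n
```

For example, with p = 61, q = 53, and e = 17, key generation gives n = 3233 and d = 2753, and the plaintext 65 encrypts to 2790, which decrypts back to 65.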

The advantages of the hybrid encryption algorithm are:

• The DES key is transmitted under RSA, so there is no need to transfer the DES key over a secret channel before communication;

• Key management is the same as in the pure RSA case: only one decryption key needs to be kept secret;

• Since RSA is used to send keys, it can also be used for digital signatures;

• The speed of encryption and decryption is essentially that of DES; the time-consuming RSA operations are applied only to the DES keys.

In our proposal we use a hybrid encryption algorithm: the DES algorithm is used for data transmission because of its higher efficiency in block encryption, and the RSA algorithm is used to encrypt the key of the DES because of its key-management advantages. Under the dual protection of the DES and RSA algorithms, the data transmission is more secure.
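The key-wrapping structure of the hybrid scheme can be sketched as follows. To keep the example self-contained, a toy repeating-key XOR stands in for DES (the point here is the structure, not the cipher), and all function names are ours:

```python
import secrets

def xor_cipher(data, key):
    """Toy repeating-key XOR, standing in for DES block encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def hybrid_encrypt(message, rsa_public):
    """Encrypt the data with a fresh symmetric session key,
    and the session key itself with RSA."""
    e, n = rsa_public
    session_key = secrets.token_bytes(7)              # 56 bits, like DES
    enc_message = xor_cipher(message, session_key)
    enc_key = pow(int.from_bytes(session_key, "big"), e, n)
    return enc_message, enc_key

def hybrid_decrypt(enc_message, enc_key, rsa_private):
    """Unwrap the session key with the RSA private key, then decrypt."""
    d, n = rsa_private
    session_key = pow(enc_key, d, n).to_bytes(7, "big")
    return xor_cipher(enc_message, session_key)
```

The session key is generated fresh per message, so only the single RSA decryption key must be kept secret, exactly as listed in the advantages above.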

3.3 Compression Method

Data compression algorithms are used to reduce redundancy and the storage requirements of data. Data compression is also an efficient approach to reducing communication costs by using the available bandwidth effectively. Over the last decade we have seen an unprecedented explosion in the amount of digital data transmitted via the Internet in the form of text, images, video, sound, computer programs, etc. If this trend continues, it will be necessary to develop compression algorithms that make the most effective use of available network bandwidth by compressing the data to the maximum level. It will also be important to consider the security of compressed data transmitted over the Internet, as most text data sent over the Internet is highly vulnerable to attack. We therefore present an intelligent, reversible transformation technique that can be applied to the source text to improve the algorithm's ability to compress it, while also offering a sufficient level of security for the transmitted data.

Adaptive Huffman coding (also called dynamic Huffman coding) is an adaptive coding technique based on Huffman coding. It permits building the code as the symbols are being transmitted, with no initial knowledge of the source distribution, which allows one-pass encoding and adaptation to changing conditions in the data [18].

The benefit of the one-pass procedure is that the source can be encoded in real time, although the code becomes more sensitive to transmission errors, since a single loss ruins the whole code.

Algorithm:

The code is represented as a tree structure in which every node has a corresponding weight and a unique number. Numbers decrease from the root downwards and from right to left on each level. Weights must satisfy the sibling property, which states that nodes must be listed in order of decreasing weight, with each node adjacent to its sibling.

Thus if A is the parent node of B and C is a child of B, then W(A) > W(B) > W(C).

The weight of a node is simply the count of transmitted symbols whose codes are associated with the leaves below that node. A set of nodes with the same weight forms a block. To obtain the code for a node in a binary tree, we traverse the path from the root to the node, writing (for example) "1" when we go right and "0" when we go left. We also need a simple, general method to transmit symbols that are "not yet transmitted" (NYT); for example, we can transmit the plain binary representation of each symbol in the alphabet.

The encoder and decoder start with only the root node, which has the maximum number and is the initial NYT node. When a symbol is transmitted for the first time, we transmit the code for the NYT node followed by the symbol's generic (fixed binary) code.

For every symbol that is already in the tree, we only transmit the code of its leaf node. For every symbol transmitted, both the transmitter and the receiver execute the update procedure:

1. If the current symbol is NYT, add two child nodes to the NYT node: one becomes the new NYT node, the other a leaf node for the symbol. Increase the weight of the new leaf node and of the old NYT node, then go to step 4. Otherwise, go to the symbol's leaf node.

2. If this node does not have the highest number in its block, swap it with the node having the highest number, unless that node is its parent.

3. Increase the weight of the current node.

4. If this is not the root node, go to the parent node and return to step 2. If it is the root, stop.

Note: swapping nodes means swapping their weights and associated symbols, but not their numbers.

Example: developing an adaptive Huffman tree

Figure 3.4 Example of Adaptive Huffman coding

Start with an empty tree.

For "a" transmit its binary code.

NYT spawns two child nodes: 254 and 255. Increase weight for root. Code for "a", associated with node 255, is 1.

For "b" transmit 0 (for NYT node) then its binary code.

NYT spawns two child nodes: 252 for NYT and 253 for leaf node. Increase weights for 253, 254, and root. Code for "b" is 01.

For the second "b" transmit 01.

Go to that leaf node, 253. We have a block of weight-1 nodes, and the highest number in the block is 255, so swap the weights and symbols of nodes 253 and 255. Increase the weight, go to the root, and increase the weight of the root.

The future code for "b" is 1, and for "a" it is now 01, reflecting their frequencies.
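The adaptive idea can be sketched in code. For brevity, the sketch below rebuilds a plain Huffman code from the running symbol counts after every symbol, rather than performing the FGK tree update described above; it keeps the two defining features of the adaptive scheme (no initial model, and an NYT escape followed by the symbol's raw 8-bit code) and round-trips correctly. All names are illustrative:

```python
import heapq
from itertools import count

NYT = None  # pseudo-symbol: "not yet transmitted" escape

def build_code(freqs):
    """Rebuild a prefix code from the running counts (ordinary Huffman)."""
    tie = count()  # unique tiebreaker keeps the construction deterministic
    heap = [(w, next(tie), {s: ""}) for s, w in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:                       # only NYT so far
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, next(tie), merged))
    return heap[0][2]

def encode(message):
    freqs = {NYT: 1}
    bits = ""
    for ch in message:
        code = build_code(freqs)
        if ch in freqs:
            bits += code[ch]
            freqs[ch] += 1
        else:              # escape + raw 8-bit code for a new symbol
            bits += code[NYT] + format(ord(ch), "08b")
            freqs[ch] = 1
    return bits

def decode(bits):
    freqs = {NYT: 1}       # the decoder mirrors the encoder's model
    out, i = [], 0
    while i < len(bits):
        inv = {c: s for s, c in build_code(freqs).items()}
        j = i + 1
        while bits[i:j] not in inv:   # prefix code: first match is unique
            j += 1
        sym, i = inv[bits[i:j]], j
        if sym is NYT:
            sym = chr(int(bits[i:i + 8], 2))
            i += 8
            freqs[sym] = 1
        else:
            freqs[sym] += 1
        out.append(sym)
    return "".join(out)
```

Because both sides update identical counts in the same order, they always rebuild the same code; the FGK update achieves the same agreement without rebuilding the tree from scratch.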

3.4 Contrast Enhancement

Histogram equalization is a technique for adjusting image intensities to enhance contrast.

Consider a discrete grayscale image {x} and let ni be the number of occurrences of gray level i. The probability of an occurrence of a pixel of level i in the image is

p_x(i) = p(x = i) = \frac{n_i}{n}, \quad 0 \le i < L

with L the total number of gray levels in the image, n the total number of pixels in the image, and p_x(i) in fact the image's histogram for pixel value i, normalized to [0, 1].

Let us also define the cumulative distribution function corresponding to px as

cdf_x(i) = \sum_{j=0}^{i} p_x(j),

which is also the image's accumulated normalized histogram.

We would like to create a transformation of the form y = T(x) to produce a new image {y} whose CDF is linearized across the value range, i.e.

cdf_y(i) = iK

for some constant K. The properties of the CDF allow us to perform such a transform; it is defined as

y = T(x) = cdf_x(x)

[Figure 3.5 (images): a JPEG example subimage shown before ("Original") and after ("Equalized") histogram equalization.]

Figure 3.5 Histogram Equalization
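The transform above can be implemented directly. The sketch below is a minimal pure-Python illustration on a flat list of gray levels (real implementations work on 2-D arrays); the final mapping rescales the CDF to the output range [0, L − 1]:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of gray levels:
    y = T(x) = round(cdf_x(x) * (levels - 1))."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1                  # n_i: occurrences of level i
    cdf, running = [], 0.0
    for h in hist:
        running += h / n              # accumulate p_x(i) = n_i / n
        cdf.append(running)
    return [round(cdf[p] * (levels - 1)) for p in pixels]
```

Because the mapping is monotone in the CDF, frequently occurring gray levels get spread across the output range, which is what raises the contrast.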

3.5 Modified BPCS Steganography

The BPCS (Bit Plane Complexity Segmentation) technique is used to embed data into bitmap files. The ultimate goal is to embed as much data as possible into a cover image without detection by human perception or statistical analysis.

In BPCS, the noisy regions of an image are located on each bit plane as small pixel blocks that have noisy patterns. Each bit plane of a container image is regularly divided into small square binary pixel blocks, as illustrated in Figure 3.6. A binary pixel block can be regarded as part of a noisy region if it has a complex black-and-white pattern. Only such complex blocks are used for embedding.

Figure 3.6 Binary pixel blocks on bit-planes

Let P be a 2N × 2N black-and-white image with black as the foreground area and white as the background area. W and B denote the all-white and all-black patterns, respectively. We introduce two checkerboard patterns Wc and Bc, where Wc has a white pixel at the upper-left position and Bc is its complement, i.e., its upper-left pixel is black, as shown in Figure 3.7. We regard black and white pixels as having the logical values "1" and "0", respectively. The conjugate of P is defined as P* = P ⊕ Wc. Pixels in the foreground area have the B pattern, while pixels in the background area have the W pattern.

Figure 3.7 Illustration of each binary pattern (N=4)

Suppose that k of the M pixel borders in a block lie between a black and a white pixel; the complexity measure is then given by

α = k/M. (1)

Figure 3.8 A simple block

For example, α = 8/112 for the block in Figure 3.8. If α of a block is large, the block has many black-and-white borders inside and can be regarded as complex; if α of a block is small, the block must be simple. The range of this measure is [0, 1]. A threshold value α0 is introduced to discriminate complex blocks from simple ones: a block B is regarded as complex if α(B) ≥ α0. Figure 3.9(a) and (b) show a simple block and a complex block with respect to α; their values of α are 20/112 and 72/112, respectively.

(a) (b)

Figure 3.9 A simple block and a complex block in respect of α
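The complexity measure (1) can be computed by counting differing adjacent pixel pairs; for an n × n block the maximum possible border count is M = 2·n·(n − 1), which is 112 for n = 8. A minimal sketch (function name ours):

```python
def alpha(block):
    """Complexity measure (1): k black-and-white borders divided by the
    maximum possible M = 2*n*(n-1) for an n-by-n binary block."""
    n = len(block)
    k = sum(block[r][c] != block[r][c + 1]          # horizontal borders
            for r in range(n) for c in range(n - 1))
    k += sum(block[r][c] != block[r + 1][c]         # vertical borders
             for r in range(n - 1) for c in range(n))
    return k / (2 * n * (n - 1))
```

An all-black or all-white block scores 0, a checkerboard scores 1, and a block split into a left white half and right black half reproduces the α = 8/112 of the example above.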

The arithmetic of BPCS steganography is as follows:

1) The carrier image is divided into 8 bit planes. Each bit plane is divided into small blocks of the same size (e.g., 8 × 8), called bit-plane blocks.

2) Calculate the complexity α of every block. The complexity is defined as the number of adjacent pixel pairs with different values (one pixel is 0, the other is 1). The maximum possible value of the complexity is denoted maxC.

3) Set the complexity threshold of the bit-plane blocks to minAlpha, a parameter of the scheme.

The image complexity α is defined by

α = k / maxC, (2)

where k is the total length of the black-and-white border in the image and maxC is its maximum possible value, so that the value ranges over 0 ≤ α ≤ 1.

This definition is global, i.e., α is calculated over the whole image area and gives the global complexity of a binary image; however, α can equally be used as a local complexity measure (e.g., over an 8 × 8 pixel area). Bit-plane blocks whose complexity is larger than minAlpha are used to embed secret information; the smaller the value of minAlpha, the more secret information can be embedded.

4) The secret information is formed into bit-plane blocks. A secret block can directly replace an original block if its complexity is greater than minAlpha. Otherwise, if its complexity is less than or equal to minAlpha, it is first conjugated with the checkerboard pattern block, and the resulting block then replaces the original one.

5) Make a record of the blocks that have undergone conjugate processing; this information must also be embedded into the carrier. Embedding this extra information must not affect the embedded secrets, and it must be possible to extract it correctly.

[Figure 3.10 (flowchart): enhance the image contrast; divide the cover image into 8×8 bit-plane blocks; calculate the complexity of every block and set the complexity threshold minAlpha; if a block's complexity exceeds minAlpha, embed the secret information in it, conjugating with the checkerboard pattern block any secret block whose complexity falls below minAlpha and recording the conjugated blocks; otherwise consider the next block; finally, embed the record information in the last high-complexity bit-plane blocks.]

Figure 3.10 Flowchart of Modified BPCS steganography

The process of secret-information extraction is simple. First, pick out all blocks of the carrier data whose complexity is greater than minAlpha, then extract the embedded record mentioned in step (5) to identify the blocks that underwent conjugate processing. These blocks are XORed with the checkerboard pattern block to recover the secret.
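The conjugation used here (and in step 4 of embedding) is a plain XOR with the checkerboard pattern Wc; a minimal sketch (function name ours):

```python
def conjugate(block):
    """XOR an n-by-n binary block with the checkerboard pattern Wc
    (white, i.e. 0, at the upper-left). Adjacent checkerboard cells
    always differ, so every adjacent pair's equality is flipped and a
    block of complexity a becomes one of complexity 1 - a."""
    n = len(block)
    return [[block[r][c] ^ ((r + c) % 2) for c in range(n)]
            for r in range(n)]
```

Because XOR with a fixed pattern is self-inverse, the receiver recovers the original secret block by applying exactly the same operation.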

The basic steganography is used for bit planes 0, 1, 2 and 3. For bit planes 4, 5, 6 and 7, a new technique is used. In addition to the basic value alpha (the minimum complexity threshold), we maintain a new value, gamma, that indicates the change in complexity from the original image. For bit planes 4, 5, 6 and 7, we calculate alpha, and if it is greater than minAlpha, we generate the bit pattern to be embedded from the secret file and calculate the alpha of that pattern as well. If the generated pattern's alpha is smaller than minAlpha, we conjugate it as in the previous algorithm. We then calculate the change from the original image (i.e., the number of bits that differ between the original and modified blocks). If this value (gamma) is greater than minGamma, we do not hide data in that 8×8 block. If it is less than minGamma, we hide the data and use the first two bits of the block to indicate whether the bit pattern is conjugated and whether valid data is indeed hidden. In this way we can make use of the entire image and increase the amount of data that can be hidden.
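The gamma criterion just described can be sketched as follows. This is an illustration under our own naming (`gamma`, `should_embed`), and it omits the two indicator bits for brevity:

```python
def gamma(original, candidate):
    """Number of bit positions where the candidate block differs from
    the original block -- the visual change embedding would cause."""
    n = len(original)
    return sum(original[r][c] != candidate[r][c]
               for r in range(n) for c in range(n))

def should_embed(original, candidate, min_gamma):
    """For bit planes 4-7, embed only while the change stays below
    the minGamma threshold."""
    return gamma(original, candidate) < min_gamma
```

Bounding the bit-flip count on the high bit planes is what lets the scheme use them at all: those planes carry most of the pixel value, so an unbounded replacement there would be visible.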

Next, a summary of the proposed algorithm is presented.

In the transmitter:

1. Encrypt the message that the sender wants to send using hybrid cryptography, and then apply compression to the result.

2. Transform the secret information into an equivalent block form; each block must be 64 bits (8 × 8) in size.

3. Enhance the host image by histogram equalization method.

4. Transform every block to a binary 8x8-bit image.

5. Decompose the host image in its bit planes.

6. Divide each bit plane in 8x8-bit blocks and compute the complexity for each one. If the complexity is higher than a specific threshold value, consider the block as noisy (high complexity level).

7. Design a map with the noisy blocks. Replace the noisy blocks by the information blocks generated in Step 2 based on the insertion map.

8. Make conjugation operations on the low complex blocks to make it complex and store the block number as a record in last block.

9. For blocks in bit planes 4, 5, 6 and 7, consider the difference between the original and the embedded block; if it is less than the specified threshold minGamma, replace the block with the information block generated in Step 2 according to the insertion map. Otherwise, skip that block.

In the receiver:

1. Decompose the host image in its bit planes.

2. Divide each bit plane in 8x8-bit blocks and compute the complexity for each one. If the complexity is higher than a specific threshold value, consider the block as noisy.

3. Design a map with the noisy blocks.

4. Extract the noisy blocks from the image.

5. Obtain the record to find out conjugate block information.

6. For blocks in bit planes 4, 5, 6 and 7, consider minGamma as at the transmitter.

7. Use the same procedure as the transmitter: decompress and then decode these noisy blocks to obtain the original secret message.

Obviously, the original host image is not needed to recover the secret message, because the only useful information is contained in the noisy regions of the stego-image. In steganography, the goal is to recover the inserted secret message, not to recover the host image in its original form.

Chapter4.

Conclusion

General methods can hide a number of bytes less than or equal to the number of pixels in the cover image. Here we have carried out a comparative study to find a better steganography method that keeps the image quality high and makes the embedding capacity large at the same time.

As we apply a hybrid approach of encryption and compression algorithms to the message, it produces a strongly secured message, which is then hidden in the image with the steganography method. The secret message thus gains high strength as it passes through a number of rounds.

In some applications, the presence of the embedded data may be known, but without the customization parameters, the data is inseparable from the image. In such cases, the image can be viewable by regular means, but the data is tied to the image and can't readily be replaced with other data. Others may know the data is there, but without the customization parameters, they cannot alter it and still make it readable by the customized software.

Future work

We are convinced that this steganography is a very strong information-security technique, especially when combined with encrypted embedded data. Furthermore, it can be applied to areas other than secret communication. Future research will include application to gray-scale cover images other than 24-bit images, identifying and formalizing the customization parameters, and developing new applications.

