The Existing Compression Model


Introduction:

Need for Image Coding: The various types of digital video have very high bit rates, which makes their transmission over the assigned channels very difficult. An entertainment video of moderate frame rate and dimensions would require a large bandwidth and a huge storage capacity that are not available on CD-ROM; thus, delivering consumer-quality video on compact disc is impossible without compression. Similarly, the data rate required by a video telephony system is greater than the bandwidth available over the plain old telephone system. Even where fiber-optic cable provides high bandwidth, the per-byte cost of transmission would have to become very low before it would be feasible to use it for HDTV. And even if the problems of storage and transport were solved, the processing power needed to manage such enormous amounts of data would make the receiver hardware very expensive.

In recent years, significant gains in storage, transmission, and processor technology have been achieved, so that the use of digital video has become possible in day-to-day life. This reduction in required bandwidth has been made possible by advances in compression techniques. Since an image carries a very large amount of information, its data size poses a problem when the image is stored or transmitted. Compression (encoding) of an image reduces the data size by removing redundancy from the image or by manipulating the values at levels where such manipulations are hard to recognize visually. Conventionally, the best-known still-image encoding method is JPEG (Joint Photographic Experts Group), recommended internationally by the ISO and ITU-T. In JPEG, several kinds of encoding schemes are specified in correspondence with the images to be encoded and the applications. In its basic scheme, excepting the reversible process, an image is segmented into 8×8 blocks, the blocks undergo the discrete cosine transform (DCT), and the transform coefficients are appropriately quantized and then encoded. Since the DCT is performed in units of blocks, so-called block distortion is produced, i.e. block boundaries become visible when the transform coefficients are coarsely quantized and encoded. Recently, on the other hand, encoding schemes based on the wavelet transform have been studied extensively, and various encoding schemes using this transform have been proposed. Since the wavelet transform does not use block division, it does not suffer from block distortion, and its image quality at high compression ratios is superior to that of JPEG [1].
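To make the block-based scheme concrete, the following is a minimal sketch, in Python with NumPy/SciPy, of an 8×8 block DCT followed by coarse uniform quantization. It illustrates the general idea only, not the JPEG standard itself; the block size, the quantization step q and the function name are assumptions chosen for the example.

    import numpy as np
    from scipy.fft import dctn, idctn

    def block_dct_quantize(img, block=8, q=32):
        """Illustrative block DCT with uniform quantization (not JPEG itself).

        A coarse step q makes the block boundaries visible (block distortion).
        """
        h, w = img.shape
        out = np.zeros((h, w), dtype=float)
        for i in range(0, h, block):
            for j in range(0, w, block):
                tile = img[i:i + block, j:j + block].astype(float)
                coeffs = dctn(tile, norm='ortho')      # forward 2-D DCT of the block
                coeffs = q * np.round(coeffs / q)      # uniform quantization
                out[i:i + block, j:j + block] = idctn(coeffs, norm='ortho')
        return out

    # Example: dimensions are multiples of the block size; a larger q gives
    # stronger block distortion in the reconstruction.
    img = np.random.randint(0, 256, (64, 64))
    reconstructed = block_dct_quantize(img, q=64)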

Image compression - An image is nothing but a 2-D signal processed by the human visual system. Compression minimizes the number of bits needed to represent an image, so that the compressed image can be stored or transmitted efficiently. The main objective of image compression is to reduce the irrelevance and redundancy of the image data. In other words, image compression is the process of minimizing the number of bytes of a given image without degrading the image quality to an unacceptable level. This reduction in file size allows more images to be stored in a given amount of disk or memory space, and it also reduces the time required for images to be sent over the Internet or downloaded from web pages. When we speak of image compression, there are generally two classes: lossless compression and lossy compression. Lossy compression of image data does not produce an exact replica of the original image; rather, it gives an approximation of the image data. Lossy compression methods most often depend on transforming the spatial image domain into a domain that reveals image components according to their relevance, which makes it possible to apply coding methods that take advantage of data redundancy in order to suppress it.

A general characteristic of most images is that neighboring pixels are correlated; an image therefore contains a large amount of redundant information, and the task is to find a less correlated representation of it. There are two fundamental components of compression: redundancy reduction and irrelevancy reduction. Redundancy reduction removes duplication from the signal source (image or video information); the greater the redundancy within the data, the more successful the compression, and digital video, which contains a great amount of redundancy, is thus very well suited to compression. Irrelevancy reduction omits those parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). The main objective of image compression is to reduce the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible. Since we focus only on still-image compression, we do not consider temporal redundancy [2].
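The claim that neighboring pixels are correlated is easy to verify numerically. The short sketch below (my own illustration, not taken from [2]) estimates the correlation coefficient between horizontally adjacent pixels; for natural images the value is typically close to 1, while for white noise it is close to 0.

    import numpy as np

    def adjacent_pixel_correlation(img):
        """Correlation between each pixel and its right-hand neighbour."""
        left = img[:, :-1].ravel().astype(float)
        right = img[:, 1:].ravel().astype(float)
        return np.corrcoef(left, right)[0, 1]

    # A pure-noise image has almost no spatial redundancy to exploit.
    noise = np.random.randint(0, 256, (256, 256))
    print(adjacent_pixel_correlation(noise))   # approximately 0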

For still-image compression, the Joint Photographic Experts Group (JPEG) standard [4] has been established. JPEG originated with the ISO (International Organization for Standardization) and the IEC (International Electrotechnical Commission). A device (software or hardware) that compresses data is often known as an encoder or coder, whereas a device that decompresses data is known as a decoder; a device that acts as both coder and decoder is known as a codec. The performance of these coders generally degrades at low bit rates, mainly because of the underlying block-based Discrete Cosine Transform (DCT) scheme. More recently, the wavelet transform has emerged as a cutting-edge technology within the field of image compression: wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. Fig. 1 shows the existing image compression model [2].

Fig. 1: Existing compression model

Nowadays the main requirement is to obtain the compressed image at a low bit rate but with improved PSNR. There are various techniques for image compression; even though they have shown good performance, each has some inefficiencies. The technical papers reviewed below describe a range of image compression schemes.
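Since PSNR is the figure of merit used throughout the papers reviewed below, a minimal sketch of how it is usually computed for 8-bit images is given here; the helper name is an assumption made for illustration.

    import numpy as np

    def psnr(original, reconstructed, peak=255.0):
        """Peak signal-to-noise ratio in dB, assuming 8-bit images (peak = 255)."""
        mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
        if mse == 0:
            return float('inf')        # identical images
        return 10.0 * np.log10(peak ** 2 / mse)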

A. Said (1996) presented a new and different implementation based on set partitioning in hierarchical trees (SPIHT), which provides even better performance than the previously reported extension of EZW that had surpassed the performance of the original EZW. The image coding results, calculated from actual file sizes and from images reconstructed by the decoding algorithm, are either comparable to or surpass previous results obtained with much more sophisticated and computationally complex methods. In addition, the new coding and decoding procedures are extremely fast, and they can be made even faster, with only a small loss in performance, by omitting entropy coding of the bit stream with an arithmetic code.

A. Quafi (2006) proposed a new approach to image coding based on Shapiro's algorithm (the Embedded Zerotree Wavelet algorithm, or EZW). This approach, the modified EZW (MEZW), distributes entropy differently than Shapiro's scheme and also optimizes the coding. It can produce a significant improvement over the PSNR and compression ratio obtained by Shapiro, without affecting the computing time. These results are also comparable with those obtained using the SPIHT and SPECK algorithms.

C. Wang (2008) described image coding algorithms based on the wavelet transform, among which the SPIHT algorithm is a very effective and computationally simple technique for image compression. Although embedded coding was first achieved using the wavelet transform, similar approaches can be taken with other transforms. The all phase biorthogonal transform (APBT) is a new transform that can be used in image compression instead of the conventional DCT. In this paper they present an APBT-based embedded image coder. The experimental results show that the proposed APBT-SPIHT algorithm improves performance at low bit rates, in terms of both PSNR and visual quality, while keeping the computational complexity low.

C. Chang (2007) proposed a direction-adaptive DWT (DA-DWT) that locally adapts the filtering directions to image content using directional lifting. With the adaptive transform, energy compaction is improved for sharp image features. A mathematical analysis based on an anisotropic statistical image model is presented to quantify the theoretical gain achieved by adapting the filtering directions, and it indicates that the proposed DA-DWT is more effective than other lifting-based approaches. Experimental results report a gain of up to 2.5 dB in PSNR over the conventional DWT for typical test images. Subjectively, the reconstruction from the DA-DWT better represents the structure in the image and is visually more pleasing.

C. Hsieh (2000) proposed a simple and efficient embedded image compression algorithm. For embedded image compression, the Embedded Zerotree Wavelet (EZW) method was proposed by Shapiro, but the EZW method spends a great deal of bit rate recording and arranging the significant locations in 2-D image compression. To obtain a better bit rate and image quality, they propose a new method called the Embedded Important-oriented-tree Wavelet (EIW). The EIW bit stream is obtained from the bit stream of the Discrete Wavelet Transform (DWT) by using the correlation between the locations of coefficients in different subbands. By replacing the significance part of the EZW bit stream with this bit stream, they avoid sorting and can save a great deal of bit rate. Simulation results showed that the proposed method obtains better low bit-rate performance than the EZW method and also produces a fully embedded coding for 2-D images.

Frederick W. Wheeler (2000) presented a variant of the SPIHT image compression algorithm called No List SPIHT (NLS). NLS operates without linked lists and is suitable for a fast, simple hardware implementation. NLS has a fixed, predetermined memory requirement about 50% larger than that needed for the image alone. Instead of lists, a state table with four bits per coefficient keeps track of the set partitions and of what information has been encoded. NLS sparsely marks selected descendant nodes of insignificant trees in the state table in such a way that large groups of predictably insignificant pixels are easily identified and skipped during the coding passes. The image data is stored in a one-dimensional recursive zig-zag array for computational efficiency and algorithmic simplicity. The performance of the algorithm on standard test images is nearly the same as that of SPIHT.
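As a rough illustration of the "one-dimensional recursive zig-zag array" idea, the sketch below computes a Morton (Z-order) index by interleaving the bits of the row and column coordinates. This is one common way to realize such a recursive layout; it is an assumption made for illustration and may differ from the exact arrangement used in the NLS paper.

    def morton_index(row, col, bits=16):
        """Interleave the bits of (row, col) into a single Z-order index."""
        idx = 0
        for b in range(bits):
            idx |= ((row >> b) & 1) << (2 * b + 1)
            idx |= ((col >> b) & 1) << (2 * b)
        return idx

    # Coefficients sorted by this index keep every 2x2 block (and, recursively,
    # every 2^k x 2^k block) contiguous in the one-dimensional array.
    order = sorted(((r, c) for r in range(4) for c in range(4)),
                   key=lambda rc: morton_index(*rc))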

F. Wheeler (2000) also discussed the No List SPIHT algorithm, proposed for fast, simple hardware implementation. Instead of lists, a state table with four bits per coefficient is used to keep track of the set partitioning and of what information has been encoded. The performance of the algorithm on standard test images is essentially the same as that of SPIHT.

J. ZhiGang (2009) discussed an image coding algorithm with lower memory requirements and higher speed. The algorithm further takes the Human Visual System into consideration and modifies the SPIHT algorithm according to the characteristics of the weighted wavelet coefficients. Experimental results show that, at low bit rates, the new algorithm improves the visual quality of the reconstructed image to a certain degree. In addition, because the new algorithm has a low memory requirement, the coding speed is accelerated considerably, so it is broadly applicable wherever memory and speed demands are high.

J. Akhtar (2006) described how different wavelets have been used to transform a test image, and discussed and analyzed the results. The analysis was carried out in terms of the PSNR (peak signal-to-noise ratio) obtained and the time taken for decomposition and reconstruction. The SPIHT coding algorithm is considered a basic standard in the wavelet-based compression field. In addition to the wavelet analysis for simple decomposition, an analysis of the SPIHT coding algorithm in terms of PSNR for different wavelets is also carried out. This analysis helps in choosing the wavelet for decomposing images according to their application.

J. M. Shapiro (1993) presented the embedded zerotree wavelet algorithm (EZW), a simple image compression algorithm with the property that the bits in the bit stream are generated in order of importance, yielding a fully embedded code. The embedded code represents a sequence of binary decisions that distinguish an image from the "null" image. Using an embedded coding algorithm, an encoder can terminate the encoding at any point, thereby allowing a target rate or target distortion metric to be met exactly. Also, given a bit stream, the decoder can cease decoding at any point in the bit stream and still produce exactly the same image that would have been encoded at the bit rate corresponding to the truncated bit stream. In addition to producing a fully embedded bit stream, EZW consistently produces compression results that are competitive with virtually all known compression algorithms on standard test images. The EZW algorithm is based on four key concepts: 1) a discrete wavelet transform or hierarchical subband decomposition, 2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, 3) entropy-coded successive-approximation quantization, and 4) universal lossless data compression achieved via adaptive arithmetic coding.
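To illustrate concept 2) above, the prediction of insignificance across scales, the sketch below tests whether a coefficient and all of its descendants in a simple dyadic (Mallat-style) coefficient layout are insignificant with respect to a threshold. The tree structure used here is a simplifying assumption for illustration, not Shapiro's full dominant and subordinate passes.

    import numpy as np

    def children(r, c, rows, cols):
        """Children of (r, c) in a dyadic wavelet-coefficient layout (simplified)."""
        kids = [(2 * r, 2 * c), (2 * r, 2 * c + 1),
                (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
        # Exclude (r, c) itself so the DC position (0, 0) does not loop forever.
        return [(i, j) for i, j in kids if (i, j) != (r, c) and i < rows and j < cols]

    def is_zerotree_root(coeffs, r, c, threshold):
        """True if (r, c) and every descendant are insignificant w.r.t. threshold."""
        rows, cols = coeffs.shape
        stack = [(r, c)]
        while stack:
            i, j = stack.pop()
            if abs(coeffs[i, j]) >= threshold:
                return False
            stack.extend(children(i, j, rows, cols))
        return True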

J. Wang (2010) described wavelet analysis as an internationally recognized, state-of-the-art tool for time-frequency analysis. It has undergone unprecedented development building on Fourier analysis and plays an important role in signal processing, especially in image compression. By analyzing the relations between the coefficients of each sub-block and those of the whole image, one finds that the index position of a sub-block within a sub-band of the whole image is the same as the position of that block in the original image. Using this observation, an image compression scheme based on the Set Partitioning in Hierarchical Trees (SPIHT) algorithm is investigated and analyzed in this paper, and its algorithmic idea and steps are given. Finally, the experimental results show it to be a good algorithm for compressing wavelet coefficients.

J. Malý (1997) presented an efficient way of reducing storage requirements. The paper proposes an implementation of a discrete-time wavelet transform based image codec using Set Partitioning in Hierarchical Trees (SPIHT) coding in the MATLAB environment.

J. Yang (2007) proposed an image coding scheme using the 2-D anisotropic dual-tree discrete wavelet transform (DDWT). First, they extend the 2-D DDWT to an anisotropic decomposition and obtain more directional subbands. Second, an iterative projection-based noise-shaping algorithm is employed to further sparsify the anisotropic DDWT coefficients. Finally, the resulting coefficients are rearranged to preserve the zerotree relationship so that they can be efficiently coded with SPIHT. Experimental results show that the proposed scheme outperforms JPEG 2000 and SPIHT at low bit rates despite the redundancy of the DDWT.

J. Lian (2006) presented a new listless zerotree image compression algorithm, observing that there is still some redundancy in the zerotree structure of SPIHT. An improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and the lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Experimental results show that the algorithm is more effective and efficient than SPIHT.

L. Tang (2009) described a novel MDC (multiple description coding) algorithm based on the Discrete Cosine Transform (DCT) and Set Partitioning in Hierarchical Trees (SPIHT). Unlike the commonly used DCT algorithms, all the transformed coefficients are reshaped into the wavelet decomposition structure to facilitate the use of the SPIHT algorithm. The direction-based information is then used to form three different channels. By using different bit rates to encode the information from the three orientations, i.e., the vertical, horizontal and diagonal directions, redundancy is introduced into the three channels, and every channel contains hybrid information from all three directions. Experimental results show the advantages of this algorithm, and a theoretical analysis is also provided.

L. Jing (2009) noted that the classic SPIHT algorithm needs to maintain three lists to temporarily store the image's zerotree structure and significance information, which is a major drawback for hardware implementation because a large amount of memory is needed to maintain these lists; besides, there is still some redundancy in the zerotree structure of SPIHT. The memory requirement is reduced significantly in LZC, where the lists are replaced by flag maps, but the performance of the codec is also lowered. In this paper, a new listless zerotree image compression algorithm is presented. An improved zerotree structure and a new coding procedure are adopted, which improve the reconstructed image quality. Moreover, the lists in SPIHT are replaced by flag maps, and the lifting scheme is adopted to realize the wavelet transform, which lowers the memory requirements and speeds up the coding process. Matching experiments on reconstructed standard stereo images demonstrate the feasibility and effectiveness of the proposed method.

M. Antonini (1992) proposed a new scheme for image compression that takes psychovisual features into account in both the space and frequency domains; the method involves two steps. First, a wavelet transform is used to obtain a set of biorthogonal subclasses of images: the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Second, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. Furthermore, to encode the wavelet coefficients, they propose a noise-shaping bit allocation procedure which assumes that details at high resolution are less visible to the human eye. Finally, in order to allow the receiver to recognize a picture as quickly as possible and at minimum cost, they present a progressive transmission scheme. It is shown that the wavelet transform is particularly well adapted to progressive transmission.
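A pyramidal, multi-scale decomposition of this kind can be sketched with the PyWavelets library; the choice of the biorthogonal 'bior4.4' wavelet and of three decomposition levels is only an illustrative assumption.

    import numpy as np
    import pywt

    img = np.random.rand(256, 256)              # stand-in for a grayscale image

    # Three-level 2-D decomposition into one coarse approximation band and
    # horizontal/vertical/diagonal detail bands at each scale.
    coeffs = pywt.wavedec2(img, wavelet='bior4.4', level=3)

    approx = coeffs[0]                          # coarsest approximation (LL) band
    for ch, cv, cd in coeffs[1:]:               # detail bands, coarsest to finest
        print(ch.shape, cv.shape, cd.shape)

    # The decomposition is invertible (up to floating-point error):
    rec = pywt.waverec2(coeffs, wavelet='bior4.4')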

M. Pooyan (2005) presented a novel approach for wavelet compression of electrocardiogram (ECG) signals based on the set partitioning in hierarchical trees (SPIHT) coding algorithm. The SPIHT algorithm has achieved prominent success in image compression; here they use a modified version of SPIHT for one-dimensional signals. They applied the wavelet transform together with the SPIHT coding algorithm to different records of the MIT-BIH database. The results show the high efficiency of this method for ECG compression.

R. A. DeVore (1992) introduced a new theory for analyzing image compression methods that are based on compression of wavelet decompositions. This theory precisely relates (a) the rate of decay of the error between the original image and the compressed image as the size of the compressed representation increases (i.e., as the amount of compression decreases) to (b) the smoothness of the image in certain smoothness classes called Besov spaces. Within this theory, the error incurred by the quantization of wavelet transform coefficients is explained. Several compression algorithms based on piecewise constant approximations are analyzed in some detail. It is shown that if pictures can be characterized by their membership in the smoothness classes considered here, then wavelet-based methods are near-optimal within a larger class of stable transform-based, nonlinear methods of image compression. Based on previous experimental research on the spatial-frequency intensity response of the human visual system, it is argued that in most instances the error incurred in image compression should be measured in the integral (L1) sense instead of the mean-square (L2) sense.

R. R. Shivel (2000) described a family of list-free tree set scanning (LIFTS) algorithms related to Shapiro's embedded zerotree wavelet coding (EZW) and to Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT) algorithm.

S. Chatterji (2002) examined the interaction of the DWT with the memory hierarchy. They modified the structure of the DWT computation and the layout of the image data to improve cache and TLB locality, and showed significant performance improvements of the DWT over a baseline implementation.

S. Lawson (2002) described how the demand for ever higher quality images transmitted quickly over the Internet has led to a strong need for better algorithms for the filtering and coding of such images. The introduction of the JPEG 2000 compression standard has meant that, for the first time, the discrete wavelet transform (DWT) is used for the decomposition and reconstruction of images together with an efficient coding scheme. The use of wavelets implies the use of subband coding, in which the image is iteratively decomposed into high- and low-frequency bands, so filter pairs are needed at both the analysis and synthesis stages. The paper aims, in tutorial form, to introduce the DWT, to illustrate its link with filters and filter banks, and to show how it may be used as part of an image coding algorithm. It concludes with a look at the qualitative differences between images coded using JPEG 2000 and those coded using the existing JPEG standard.

Sadashiva (2011) addressed the storage, manipulation, and transfer of digital images. The files that make up these images can be quite large and can quickly take up precious memory space on a computer's hard drive. In multimedia applications most images are in color, and color images contain a lot of data redundancy and require a large amount of storage space. In this work, they present the performance of different wavelets using the SPIHT algorithm for compressing color images. The R, G and B components of the color image are converted to YCbCr before the wavelet transform is applied; Y is the luminance component, and Cb and Cr are the chrominance components. The Lena color image is taken for analysis. The image is compressed at different bits per pixel by changing the level of wavelet decomposition, and MATLAB software is used for simulation. Results are analyzed using PSNR and HVS properties, and graphs are plotted to show the variation of PSNR for different bits per pixel and levels of wavelet decomposition.
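The RGB-to-YCbCr step mentioned above can be sketched with the common BT.601-based (JPEG/JFIF-style) conversion below; whether the cited work used exactly these coefficients is an assumption.

    import numpy as np

    def rgb_to_ycbcr(rgb):
        """Convert an 8-bit RGB image of shape (H, W, 3) to YCbCr (BT.601)."""
        rgb = rgb.astype(float)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  =  0.299  * r + 0.587  * g + 0.114  * b
        cb = -0.1687 * r - 0.3313 * g + 0.5    * b + 128.0
        cr =  0.5    * r - 0.4187 * g - 0.0813 * b + 128.0
        return np.stack([y, cb, cr], axis=-1)

    # The wavelet transform and SPIHT coding are then applied to Y, Cb and Cr
    # separately, typically spending fewer bits on the chrominance channels.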

S. Cho (2002) noted that compressed video bit streams require protection from channel errors in a wireless channel. The 3-D set partitioning in hierarchical trees (SPIHT) coder has proved its efficiency and its real-time capability in the compression of video, and a forward-error-correcting (FEC) channel code (RCPC) combined with a single automatic repeat request (ARQ) proved to be an effective means of protecting the bit stream. There were two problems with this scheme: 1) the noiseless reverse channel required for ARQ may not be feasible in practice, and 2) in the absence of channel coding and ARQ, the decoded sequence was hopelessly corrupted even for relatively clean channels. In this paper, they eliminate the need for ARQ by making the 3-D SPIHT bit stream more robust and resistant to channel errors. They first break the wavelet transform into a number of spatio-temporal tree blocks which can be encoded and decoded independently by the 3-D SPIHT algorithm. This procedure brings the added benefit of parallelizing the compression and decompression algorithms, and it enables region-based coding. They demonstrate the packetization of the bit stream and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness. Then they encode each packet with a channel code. Not only does this protect the integrity of the packets in most cases, but it also allows detection of packet-decoding failures, so that only the cleanly recovered packets are reconstructed. In extensive comparative tests, the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier. Furthermore, the parallelization makes real-time implementation possible in hardware and software.

U. Bayazit (2001) proposed several low-complexity algorithmic modifications to the SPIHT (Set Partitioning in Hierarchical Trees) image coding method. The modifications exploit universal traits common to real-world images. Approximately 1-2% compression gain (bit-rate reduction for a given mean squared error) was obtained for the images in their test suite by incorporating all of the proposed modifications into SPIHT.

W. Chien (2006) described the DCT and modified the SPIHT algorithm to encode DCT coefficients. The algorithm rearranges the DCT coefficients to concentrate the signal energy, and proposes combination and dictator operations to eliminate the correlation within the same-level subband when encoding DCT-based images. The proposed algorithm also provides a deblocking function at low bit rates in order to improve the perceptual quality. The contribution of this work is that the coding complexity of the proposed algorithm for DCT coefficients is close to that of JPEG while its performance exceeds that of JPEG 2000. Experimental results indicate that the proposed technique improves the quality of the reconstructed image in terms of PSNR, with perceptual results close to those of JPEG 2000 at the same bit rate.

W.-K. Lin (1998) presented a zerotree coding method for color images that uses no lists during encoding and decoding, removing the list requirement of Said and Pearlman's Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Without the lists, the memory requirement of a VLSI implementation is reduced significantly. The coding algorithm is also designed to reduce the circuit complexity of an implementation. The experimental results show only a minor reduction in PSNR values compared with those obtained by the SPIHT codec, illustrating well the trade-off between memory requirement and hardware simplicity.

Y.-h. Wu (2010) improved the No List SPIHT (NLS) algorithm and proposed a fast parallel SPIHT algorithm suitable for implementation on an FPGA. It can deal with all bit planes simultaneously and processes 4 pixels per clock period, so the encoding time depends only on the image resolution. The experimental results show that the processing capacity can reach 200 Mpixels/s with a 50 MHz input clock; the system needs 2.29 ms to complete lossless compression of a 512×512×8-bit image, and only 1.31 ms in the optimal case. The improved algorithm keeps the high SNR unchanged, greatly increases the speed and reduces the required storage space. It can perform lossless or lossy compression, and the compression ratio can be controlled, so it could be widely used in the field of high-speed, high-resolution image compression.

Z. Xiong (1999) presented a study of the performance difference between the discrete cosine transform (DCT) and the wavelet transform for both image and video coding, while comparing other aspects of the coding system on an equal footing based on state-of-the-art coding techniques. For still images, the wavelet transform outperforms the DCT typically by about 1 dB in peak signal-to-noise ratio. For video coding, the advantage of wavelet schemes is less obvious. They argue that image and video compression should be addressed from the overall system viewpoint: quantization, entropy coding, and the complex interplay among the elements of the coding system matter more than spending all the effort on optimizing the transform.

Z. Lu (2000) studied a wavelet electrocardiogram (ECG) data codec based on the set partitioning in hierarchical trees (SPIHT) compression algorithm. The SPIHT algorithm has achieved notable success in still-image coding. They modified the algorithm for the one-dimensional case and applied it to the compression of ECG data. Experiments on selected records from the MIT-BIH arrhythmia database revealed that the proposed codec is significantly more efficient in compression and in computation than previously proposed ECG compression schemes. The coder also attains exact bit-rate control and generates a bit stream that is progressive in quality or rate.

All the above papers describe different image compression schemes. I have used the lifting wavelet scheme for the wavelet decomposition to enhance speed, and a modified SPIHT to improve image quality (i.e. to improve PSNR).
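As a concrete illustration of the lifting idea, the following is a minimal sketch of one level of the integer CDF 5/3 (LeGall) lifting transform on a one-dimensional signal; the same predict/update structure extends to images by applying it along the rows and then along the columns. The function name and the boundary handling are assumptions made for the example, not the exact implementation used in this work.

    import numpy as np

    def lifting_53_forward(x):
        """One level of the integer CDF 5/3 (LeGall) lifting transform (1-D).

        Returns (approximation, detail); assumes len(x) is even.
        """
        x = np.asarray(x, dtype=np.int64)
        s, d = x[0::2].copy(), x[1::2].copy()     # split into even / odd samples

        # Predict: subtract from each odd sample the mean of its even neighbours.
        s_next = np.append(s[1:], s[-1])          # symmetric extension at the right edge
        d -= (s + s_next) >> 1

        # Update: correct the even samples with the neighbouring detail values.
        d_prev = np.insert(d[:-1], 0, d[0])       # symmetric extension at the left edge
        s += (d_prev + d + 2) >> 2

        return s, d

    # Example: approximation (low-pass) and detail (high-pass) halves of a signal.
    approx, detail = lifting_53_forward([10, 12, 14, 13, 11, 9, 8, 8])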


