Lifting Based DWT Scheme Computer Science Essay


The DWT IP core was designed using the Synopsys ASIC design flow. First, the code was written in VHDL and implemented on an FPGA using a 32x32 random image. The code was then taken through the ASIC design flow, for which an 8x8 memory was considered to store the image. This architecture enables fast computation of the DWT with parallel processing, has low memory requirements and consumes low power. The same concepts mentioned above are used in designing the Inverse Discrete Wavelet Transform (IDWT).

The overall system architecture has been developed by implementing key components such as the lifting based Discrete Wavelet Transform, shift-and-add multipliers and BZ-FAD multipliers, which reduce complexity and increase performance. The discrete wavelet transform and the inverse wavelet transform have been implemented for the image compression process, and the lifting mechanism has been implemented in order to reduce the number of iterations.

The overall system implementation and its functions are described below:

5.1 Lifting Based DWT Scheme

The top level architecture for the 1D DWT is presented in Fig. 13a and Fig. 13b. The input X is decomposed into multiple sub bands of low frequency and high frequency components, extracting the detailed parameters from X using multiple stages of low pass and high pass filters. The sub band filters are symmetric and satisfy the orthogonality property. For an input image, the two 1D DWT computations are carried out in the horizontal and vertical directions to compute the two-dimensional decomposition. The inverse DWT process combines the decomposed image sub bands back into the original signal. Reconstruction of the image is possible owing to the symmetric and inverse properties of the low pass and high pass filter coefficients.

The input x(n1, n2) is decomposed into four sub-components YLL, YLH, YHL and YHH. This constitutes a one-level decomposition. The YLL sub-band component is processed further and decomposed into another four sub-band components, thus forming a two-level decomposition. This process is continued as per the design requirements till the requisite quality is obtained. Every stage of the DWT requires LPF and HPF filters with down sampling by 2. Lifting based DWT computation is widely adopted for image decomposition. In this work, we propose a modified architecture based on the BZ-FAD multiplier [26, 27] to realize the lifting based DWT.

The lifting scheme is one of the techniques used to realize the DWT architecture [28]. It reduces the number of operations to be performed by half, and the filters can be decomposed into further lifting steps. Both the memory required and the computation involved are lower in the lifting scheme. The algorithm is fast to implement and the inverse transform is also simple in this method. The block diagram of the lifting scheme [28] is shown in Fig. 14.

Figure 13a Image Decomposition

Figure 13b Image Decomposition

The z⁻¹ blocks are delays; α, β, γ, δ and ζ are the lifting coefficients, and the shaded blocks are registers. The 9/7 filter has been used for the implementation, which requires four lifting steps and one scaling step. The input signal x(i) is split into two parts, the even part x(2i) and the odd part x(2i+1). Thereafter, the first lifting step is performed, given by equations (1) and (2).

d1(i) = α (x(2i) + x(2i+2)) + x(2i+1) …(1)

a1(i) = β (d1(i) + d1(i-1)) + x(2i) …(2)

The first equation is predict P1 and the second equation is update U1. Then the second lifting step is performed resulting in equations (3) and (4):

d2(i) = γ (a1(i) + a1(i+1)) + d1(i) …(3)

a2(i) = δ (d2(i) + d2(i-1)) + a1(i) …(4)

The third equation is predict P2 and the fourth equation is update U2. Thereafter the scaling is performed in order to obtain the approximation and detail coefficients of DWT as given in equations (5) and (6).

a(i) = ζ a2(i) …(5)

d(i) = d2(i) / ζ …(6)

Equations (5) and (6) are the scaling steps G1 and G2 respectively. The predict step determines the correlation between the sets of data and predicts the even data samples from the odd ones; these samples are used for updating the present phase. Some properties of the original input data can also be maintained in the reduced set by constructing a new operator using the update step. The lifting coefficients have the constant values -1.58613, -0.0529, 0.882911, 0.44350 and 1.1496 for α, β, γ, δ and ζ respectively. It may be observed from these equations that the computation of the final coefficients requires 6 steps. Data travels in sequence from stage 1 to stage 6, introducing 6 units of delay. To speed up the computation, a modified lifting scheme is proposed and realized.

Figure 14: Block diagram of the lifting scheme
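To make the data flow of equations (1)-(6) concrete, the following Python sketch models one level of the 1-D 9/7 lifting computation in floating point. It is a software illustration only, not the hardware datapath of Fig. 14; the simple boundary clamping and the positive value used for ζ are assumptions made for the example.

```python
import numpy as np

# 9/7 lifting coefficients as listed in the text; zeta is taken here as the
# standard positive scaling constant (an assumption of this sketch).
ALPHA, BETA, GAMMA, DELTA, ZETA = -1.58613, -0.0529, 0.882911, 0.44350, 1.1496

def lifting_97_forward(x):
    """One level of the 1-D 9/7 lifting DWT following equations (1)-(6).

    x is assumed to have even length; out-of-range neighbours are handled by
    simple index clamping, which is an assumption of this sketch rather than
    the core's actual edge treatment.
    """
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]                  # split into x(2i) and x(2i+1)
    n = len(even)
    at = lambda v, i: v[min(max(i, 0), n - 1)]    # clamp index at the boundaries

    d1 = [odd[i] + ALPHA * (even[i] + at(even, i + 1)) for i in range(n)]   # predict P1, eq. (1)
    a1 = [even[i] + BETA * (d1[i] + at(d1, i - 1)) for i in range(n)]       # update  U1, eq. (2)
    d2 = [d1[i] + GAMMA * (a1[i] + at(a1, i + 1)) for i in range(n)]        # predict P2, eq. (3)
    a2 = [a1[i] + DELTA * (d2[i] + at(d2, i - 1)) for i in range(n)]        # update  U2, eq. (4)

    a = ZETA * np.array(a2)                       # scaling G1, eq. (5): approximation a(i)
    d = np.array(d2) / ZETA                       # scaling G2, eq. (6): detail d(i)
    return a, d

approx, detail = lifting_97_forward([3, 1, 4, 1, 5, 9, 2, 6])   # small example input
print(approx, detail)
```

Each output pair depends only on a few neighbouring intermediate values, which is what permits the register-based, in-place realization suggested by Fig. 14.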

5.2 Arithmetic Building Blocks for Lifting Scheme Implementation

High-speed multiplication has always been a fundamental requirement of high performance systems. The multiplier structure is one of the processing elements that consumes the most area and power and also causes delay. Therefore, there is a need for high-speed architectures for N-bit multipliers with optimized area, speed and power. Multipliers are built from adders in order to reduce the partial product logic delay and regularize the layout. To improve regularity and obtain a compact layout, regularly structured trees with recurring blocks and rectangular-styled trees obtained by folding have been proposed, at the expense of complicated interconnects [29]. The present work focuses on multiplier design for low power applications such as the DWT, rapidly reducing the partial product rows by identifying the critical paths and signal races in the multiplier. The focus of the design has been to optimize the speed, area and power of the multiplier, which forms the major bottleneck in lifting based DWT [30].

5.2.1 Shift and Add Multiplier

In shift-and-add based multiplier logic, the multiplicand (A) is multiplied by the multiplier (B). If the registers A and B storing the multiplicand and multiplier are N bits wide, the shift-and-add multiplier logic requires two N-bit registers, an N-bit adder and an (N+1)-bit accumulator. It also requires an N-bit counter to control the number of addition operations. In shift-and-add logic, the LSB of the multiplier is checked for 1 or 0. If the LSB is 0, the accumulator is shifted right by one bit position. If the LSB is 1, the multiplicand is added to the accumulator content and the accumulator is shifted right by one bit. The counter is decremented for every operation; the addition is performed until the counter reaches zero, which is indicated by the Ready signal. The product is available in the accumulator after N clock cycles. Fig. 15 shows the block diagram of the conventional multiplier using shift-and-add logic, which generates a partial product (PP). B(0) is generally used to select A or 0 as appropriate.

Figure 15: The Architecture of the Conventional Shift-and-Add Multiplier
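The cycle-by-cycle behaviour described above can be captured in a short software model. This is a behavioural sketch of the datapath in Fig. 15, not the RTL; the function name and the use of a single wide integer in place of the accumulator/multiplier register pair are simplifications introduced for the example.

```python
def shift_add_multiply(a, b, n=8):
    """Cycle-by-cycle model of the conventional shift-and-add multiplier (Fig. 15).

    a is the multiplicand and b the multiplier, both treated as unsigned n-bit
    values.  A single 2n-bit integer stands in for the accumulator/product
    register pair of the hardware.
    """
    acc = 0
    for _ in range(n):          # the counter limits the operation to n cycles
        if b & 1:               # B(0) selects either A or 0 as the partial product
            acc += a << n       # add A into the upper half of the accumulator
        acc >>= 1               # shift the accumulator right by one bit position
        b >>= 1                 # expose the next multiplier bit at B(0)
    return acc                  # after n cycles the accumulator holds the product

assert shift_add_multiply(13, 11) == 143    # 13 x 11 as an 8-bit example
```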

5.2.2. Modified BZ-FAD Multiplier

As discussed for the shift-and-add logic earlier, if the LSB is 1, the multiplicand is added to the accumulator. If the multiplier contains more 1s than 0s, additions occur frequently, triggering the full adder blocks within the adder. We know that power dissipation is due to the switching activity of the input lines: whenever an input or output changes, charge is switched between Vdd and Vss, consuming power. In order to reduce the power dissipation, the switching activity on the I/O lines must be reduced. The BZ-FAD logic based multiplier [31] reduces the switching activity and thus the power dissipation.

In the shift-and-add operation, the counter keeps track of the number of cycles, thus controlling the multiplication operation. In a binary counter, more than one output bit may change at a time. For example, if the current counter value is 3 (binary 011) and changes to 4 (binary 100), three bit changes occur. This causes switching activity, which can be reduced by replacing the binary counter with a ring counter. In a ring counter, at any given point of time, only one bit change occurs, thus reducing the switching activity and power dissipation.

Another major source of power dissipation in shift-and-add logic is the shifting itself. For every '0' bit of the multiplier, a shift operation is performed, so all the bits in the accumulator are shifted by one bit position. This causes switching and hence more power is dissipated. In BZ-FAD logic, if the LSB is 0, the shift operation is bypassed and a zero is introduced at the MSB, so there is no shifting of the accumulator content. In other words, if the LSB is zero, the accumulator content is passed on without an addition, and the zero introduced by the control logic has the same effect as the right shift operation. The architecture of this multiplier is shown in Fig. 16. In BZ-FAD, the control activity of the ring counter, latch and bypass logic is realized using NMOS transistors, which introduces delay. The parasitic capacitance of the NMOS transistors also increases the load capacitance and thus the power dissipation.

Figure 16: Low power multiplier architecture

In order to reduce power dissipation, the transistor logic is replaced by MUX logic having ideal fan-in and fan-out capacitances. With MUX based logic, the control signals can be suitably controlled to reduce the switching activity, since they are enabled only when required, based on the inputs derived from the ring counter. However, the design requires more transistors and thus increases the chip area. We have also used the ripple carry adder, which has the lowest average number of transitions per addition among the carry look-ahead, carry-skip, carry-select and conditional-sum adders, to reduce power dissipation. The various multipliers are modeled in HDL, analyzed for their performance, and the results are tabulated for comparison.
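The bypass-zero idea can be illustrated with a similar behavioural model. The sketch below produces the same product as the conventional shift-and-add model but also counts how many adder activations actually occur: every 0-bit of the multiplier bypasses the adder and the associated accumulator activity, which is where the claimed power saving of BZ-FAD comes from. The names and structure are illustrative; the ring counter, latch and multiplexer details of Fig. 16 are abstracted away.

```python
def bz_fad_multiply(a, b, n=8):
    """Behavioural sketch of the bypass-zero idea behind the BZ-FAD multiplier.

    The returned product is identical to the conventional model; the 'adds'
    counter shows how many adder activations are actually needed.
    """
    acc, adds = 0, 0
    for cycle in range(n):
        if (b >> cycle) & 1:            # only non-zero multiplier bits reach the adder
            acc += a << cycle           # partial product aligned to its bit position
            adds += 1                   # one genuine adder activation
        # zero bits: no addition and no accumulator shift -- the '0' is simply
        # steered into the product by the control logic
    return acc, adds

product, adder_ops = bz_fad_multiply(13, 11)
print(product, adder_ops)               # 143 with only 3 adder activations in 8 cycles
```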

The next section discusses the comparison results of these multiplier algorithms [32].

5.3. Discrete Wavelet Transform and Inverse Discrete Wavelet Transform Implementation

The DWT has traditionally been implemented by convolution. Such an implementation demands both a large number of computations and a large amount of storage, features that are not desirable for either high-speed or low-power applications. Recently, a lifting-based scheme that often requires far fewer computations has been proposed for the DWT [33] [34]. The main feature of the lifting based DWT scheme is to break up the high pass and low pass filters into a sequence of upper and lower triangular matrices and convert the filter implementation into banded matrix multiplications. Such a scheme has several advantages, including "in-place" computation of the DWT, an integer-to-integer wavelet transform (IWT), a symmetric forward and inverse transform, etc. Therefore, it comes as no surprise that lifting has been chosen for the implementations presented here.

The proposed architecture computes the multilevel DWT for both the forward and the inverse transforms: one level at a time, in a row-column fashion. There are two row processors that compute the high pass and low pass filter outputs, as shown in Fig. 17. Four column processors operate on the row-processed outputs. The outputs generated by the row and column processors divide the input image into the four sub bands LL, LH, HL and HH. These sub bands are stored in memory modules for further processing. The memory modules are divided into multiple banks to accommodate the high computational bandwidth requirements. The proposed architecture is an extension of the architecture for the forward transform that was presented earlier. A number of architectures have been proposed for calculation of the convolution-based DWT. These architectures are mostly folded and can be broadly classified into serial architectures, where the inputs are supplied to the filters in a serial manner, and parallel architectures, where the inputs are applied to the filters in a parallel manner.

We have evolved a design methodology for the lifting based DWT that reduces the memory requirements and the communication among processors when the image is broken up into blocks. For a system consisting of the lifting-based DWT followed by an embedded zero-tree algorithm, a new interleaving scheme that reduces the number of memory accesses has been proposed. Finally, a lifting-based DWT architecture has been developed, as shown in Figure 17, which is capable of performing the filter operation with one lifting step, i.e., one predict and one update step. The outputs are generated in an interleaved fashion.

Figure 17: Lifting-based DWT

The lifting scheme is represented by the following equations of the 1-D DWT:

h(i) =x(2i+1)+α(x(2i)+x(2i+2)) …(7)

l(i)=x(2i)+β(h(i)+h(i-1)) …(8)

The 2-D DWT is a multilevel decomposition technique; each level decomposes the input into the four sub bands hh, hl, lh and ll. The mathematical formulae governing the 2-D DWT are as follows:

hh(i, j) = h(2i +1, j) +α (h(2i, j) + h(2i + 2, j)) …(9)

hl(i, j) = h(2i, j) + β (hh(i, j) + hh(i −1, j)) …(10)

lh(i, j) = l(2i +1, j) +α (l(2i, j) + l(2i + 2, j)) …(11)

ll(i, j) = l(2i, j) + β (lh(i, j) + lh(i −1, j)) …(12)

Similarly, the 2-D IDWT is defined as follows:

l(2i,j) = ll(i,j)-β(lh(i,j)+lh(i-1,j)) …(13)

l(2i+1,j) = lh(i,j)-α(l(2i,j)+l(2i+2,j)) …(14)

h(2i,j) = hl(i,j)-β(hh(i,j)+hh(i-1,j)) …(15)

h(2i+1,j)= hh(i,j)-α(h(2i,j)+h(2i+2,j)) …(16)

x(i,2j) = l(i,j)-β(h(i,j)+h(i,j-1)) …(17)

x(i,2j+1) = h(i,j)-α(x(i,2j)+x(i,2j+2)) …(18)
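Equations (7)-(18) can be exercised with a small row-column software model. The sketch below applies the predict/update pair of equations (7) and (8) along the rows and then along the columns, and then applies the mirrored inverse steps in the manner of equations (13)-(18); perfect reconstruction follows because each inverse step exactly undoes the corresponding forward step. The α, β values (the 5/3-style constants -1/2 and 1/4) and the edge-replication boundary handling are assumptions made for this example, not the coefficients of the implemented core.

```python
import numpy as np

# Illustrative predict/update constants; perfect reconstruction below holds
# for any alpha, beta because the inverse mirrors the forward steps exactly.
ALPHA, BETA = -0.5, 0.25

def fwd1d(x):
    """One lifting step along a 1-D signal, following equations (7) and (8)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    n = len(even)
    nxt = lambda v, i: v[min(i + 1, n - 1)]     # edge replication at the right boundary
    prv = lambda v, i: v[max(i - 1, 0)]         # edge replication at the left boundary
    h = np.array([odd[i] + ALPHA * (even[i] + nxt(even, i)) for i in range(n)])
    l = np.array([even[i] + BETA * (h[i] + prv(h, i)) for i in range(n)])
    return l, h

def inv1d(l, h):
    """Inverse lifting step in the manner of equations (13)-(18), mirroring fwd1d."""
    n = len(l)
    nxt = lambda v, i: v[min(i + 1, n - 1)]
    prv = lambda v, i: v[max(i - 1, 0)]
    even = np.array([l[i] - BETA * (h[i] + prv(h, i)) for i in range(n)])
    odd = np.array([h[i] - ALPHA * (even[i] + nxt(even, i)) for i in range(n)])
    x = np.empty(2 * n)
    x[0::2], x[1::2] = even, odd
    return x

def dwt2d(img):
    """One-level 2-D DWT in row-column fashion: rows first, then columns."""
    rows_l, rows_h = zip(*(fwd1d(row) for row in img))
    L, H = np.array(rows_l), np.array(rows_h)
    ll, lh = zip(*(fwd1d(col) for col in L.T))
    hl, hh = zip(*(fwd1d(col) for col in H.T))
    return np.array(ll).T, np.array(lh).T, np.array(hl).T, np.array(hh).T

def idwt2d(ll, lh, hl, hh):
    """One-level 2-D IDWT: undo the column transform, then the row transform."""
    L = np.array([inv1d(l, h) for l, h in zip(ll.T, lh.T)]).T
    H = np.array([inv1d(l, h) for l, h in zip(hl.T, hh.T)]).T
    return np.array([inv1d(l, h) for l, h in zip(L, H)])

img = np.arange(64, dtype=float).reshape(8, 8)      # small 8x8 test block
assert np.allclose(idwt2d(*dwt2d(img)), img)        # perfect reconstruction
```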

The Discrete Wavelet Transform and Inverse Discrete Wavelet Transform cores have been coded in Verilog and synthesized using Synopsys Design Compiler. The two RTL cores operate at a maximum clock frequency of 200 MHz. The DWT and IDWT designs have been checked for testability. The timing and power reports were obtained using PrimeTime. The DWT and IDWT architectures complete the computation in (4N^2 (1 − 4^−j) + 9N)/6 cycles, where N is the number of samples and j the decomposition level. The total power consumption of the DWT/IDWT processor is ~0.367 mW. The area of the designed architecture using 0.13 micron technology is 112 × 114 μm², and the maximum reported frequency of operation is 200 MHz for the Discrete Wavelet Transform. For the IDWT, whose architecture is presented in Fig. 18, the area and the maximum frequency reported are 112 × 114 μm² and 200 MHz respectively.
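As a quick illustration of the computation-time expression, the snippet below evaluates it for example values (N = 8 samples, one decomposition level); the numbers are purely illustrative and are not reported results.

```python
# Illustrative evaluation of the cycle-count expression (4N^2(1 - 4^-j) + 9N)/6
# for example values N = 8, j = 1.
N, j = 8, 1
cycles = (4 * N**2 * (1 - 4 ** (-j)) + 9 * N) / 6
print(cycles)   # -> 44.0
```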

Figure 18: Lifting based IDWT

5.4. 3D DWT Architecture

The design and VLSI implementation of a high speed, low power 3D wavelet architecture is targeted at video coding applications.

A flexible hardware architecture is designed for performing the 3D Discrete Wavelet Transform. The proposed architecture uses a new and fast lifting scheme which is able to perform progressive computations by minimizing the buffering between the decomposition levels. The 3D wavelet decomposition is computed by applying three separate 1D transforms along the coordinate axes of the video data. The 3D data is usually organized frame by frame. A single frame has rows and columns as in the 2D case, with the x and y directions often denoted as "spatial coordinates", whereas for video data a third "time" dimension is added (the z direction). The input data is a set of multiple frames, each consisting of N rows and N columns, where N is an integer. The 3D DWT can be considered as a combination of three 1D DWTs in the x, y and z directions as shown in Fig. 19. A preliminary step in the DWT processor design is to build 1D DWT modules, which are composed of high-pass and low-pass filters that perform a convolution of filter coefficients and input pixels. After one level of the 3D discrete wavelet transform, the volume of frame data is decomposed into HHH, HHL, HLH, HLL, LHH, LHL, LLH and LLL signals as shown in Fig. 19.

The arithmetic blocks adopted in the design of the 1D/2D DWT are extended to the design of the 3D DWT as well. One of the major changes in the 3D architecture is the intermediate memory stages that are required for reordering the 1D and 2D output samples for the computation of the 3D samples. Fig. 20 shows the 3D architecture with intermediate memories. For the computation of the 3D DWT, the 1D DWT is performed first on video frame data of size 8x8x8. Input data are fed to the first stage of the DWT, where an 8-bit shift register is used; it reads data from the memory and applies the data serially to the DWT module. The first stage of the DWT is a split stage, which divides the input data into even and odd samples. The next stage is the predict stage: the even samples are multiplied by the predict factor and the results are added to the odd samples to generate the detail coefficients. These coefficients computed by the predict step are then multiplied by the update factors and added to the even samples to obtain the coarse coefficients. This process continues.

Figure 19 One-level 3D DWT structure

Figure 20: 3D DWT architecture using intermediate memory
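The axis-by-axis, separable nature of this one-level 3-D decomposition can be summarized in a short software sketch. This is an illustrative model only: the predict/update constants, helper names and axis ordering are assumptions for the example, and the sketch merely shows how the eight sub-bands of Fig. 19 arise from three 1-D lifting passes over an 8x8x8 block.

```python
import numpy as np

ALPHA, BETA = -0.5, 0.25   # illustrative predict/update constants, not the core's coefficients

def lift1d(x):
    """One predict/update lifting step along the last axis (even length assumed)."""
    even, odd = x[..., 0::2], x[..., 1::2]
    even_next = np.concatenate([even[..., 1:], even[..., -1:]], axis=-1)   # edge replication
    h = odd + ALPHA * (even + even_next)                                   # detail (high-pass)
    h_prev = np.concatenate([h[..., :1], h[..., :-1]], axis=-1)
    l = even + BETA * (h + h_prev)                                         # coarse (low-pass)
    return l, h

def dwt3d(volume):
    """One-level 3-D DWT: a 1-D lifting pass along z, then y, then x,
    producing the eight sub-bands of Fig. 19 (keys are 3-letter band labels)."""
    bands = {"": np.asarray(volume, dtype=float)}
    for axis in (2, 1, 0):                                # z, then y, then x direction
        split = {}
        for name, data in bands.items():
            moved = np.moveaxis(data, axis, -1)           # bring the target axis last
            l, h = lift1d(moved)
            split["L" + name] = np.moveaxis(l, -1, axis)
            split["H" + name] = np.moveaxis(h, -1, axis)
        bands = split
    return bands

subbands = dwt3d(np.random.rand(8, 8, 8))                 # an 8x8x8 group of frames, as in the text
print(sorted(subbands.keys()))                            # eight 4x4x4 sub-band blocks
```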


