Survey Of Image Reconstruction Methods Computer Science Essay


02 Nov 2017


Introduction

Image Processing:

Digital image processing converts an image into digital form and performs operations on it to extract useful information. An image is imported with an optical scanner, analyzed and manipulated (including data compression and enhancement), and the altered image is produced as output. There are three broad types of processing. Image restoration corrects data errors, noise, and geometric distortions introduced during scanning and playback. Image enhancement improves the information content of an image through intensity and color manipulation, density slicing, edge enhancement, and other digital adjustments. Image information extraction classifies pixels on the basis of their digital signatures in the component bands of an image, using ratioing, multispectral classification, and change-detection methods.

Image Reconstruction:

Image reconstruction refers to the process of recreating a continuous image from noisy or incomplete measurements, recovering the original image. It includes reducing the noise in degraded images and deblurring. Reconstruction based on the Radon transform relates the image matrix to projections taken along specified directions, and is used to reconstruct two-dimensional and three-dimensional images from the projections acquired in CT, MRI, PET, and SPECT.

The method takes samples of the image, detects missing lines and failed pixels, and reconstructs the imaging from them. Various algorithms and techniques are applied to reconstruct the image. A digital image is processed as an array of pixel values that represent the image information. A two-dimensional image distributes image points over coordinates (x, y); three-dimensional image data assign a value to each point (x, y, z). The main families of reconstruction methods are analytical image reconstruction, iterative image reconstruction, noniterative image reconstruction, and parametric and nonparametric image reconstruction. Analytical image reconstruction covers back projection and filtered back projection; iterative reconstruction covers the algebraic reconstruction techniques used to manipulate the images.

Application of Image Reconstructions:

Image reconstruction is used in biological science, meteorology and satellite imaging, materials science, medical science, industrial inspection and quality control, geology, astronomy, military applications, physics and chemistry, earth science, archaeology, photography, and nondestructive testing.

Image Reconstruction Methods:

Image reconstruction methods (originally presented as a diagram leading from the original image through reconstruction) are:

- Analytical image reconstruction (back projection, filtered back projection)
- Iterative image reconstruction
- Noniterative image reconstruction
- Parametric image reconstruction
- Nonparametric image reconstruction

Analytical Image Reconstruction

Analytical reconstruction, exemplified by the filtered backprojection algorithm, is efficient (fast) and elegant, but unable to handle complicated factors such as scatter. Filtered back projection has been used for reconstruction in x-ray CT and, until recently, for most SPECT and PET reconstructions.

Back Projection

Back projection reconstructs an image by smearing each measured projection back across the image plane along the direction in which it was acquired; summing these smeared projections records how well the pixels of a given image fit the measured distribution.

Filtered Back Projection

Filtered backprojection (FBP) is one type of analytical image reconstruction. FBP is the standard method for reconstructing CT images, reducing noise to produce high-quality images at low radiation doses. It is the most common method used for tomographic reconstruction of clinical data. The method works on the concept of reconstruction from multiple projections: each projection is filtered and then backprojected across the image. FBP is widely used on clinical CT scanners because of its computational efficiency and numerical stability. FBP is closely related to Fourier filtering reconstruction, which builds the image from one- and two-dimensional Fourier transforms.

1D Fourier Transform

2D Fourier Transform

Now consider the two-dimensional Fourier transform of the function µ(x, y):

\begin{displaymath}M(k, l) = \int_{-\infty}^\infty \int_{-\infty}^{\infty} \mu(x,y) e^{-2\pi i (k x + l y)} \,\textrm{d} x\,\textrm{d} y \end{displaymath}
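As a quick illustration, the discrete counterpart of this transform can be computed with NumPy's FFT routines. The grid size and test object below are my own toy choices, not values from the original:

```python
import numpy as np

# Discrete analogue of M(k, l): the 2D FFT approximates the continuous
# Fourier integral of mu(x, y) sampled on a regular grid.
mu = np.zeros((64, 64))
mu[28:36, 28:36] = 1.0          # a small square "object"

M = np.fft.fft2(mu)             # 2D spectrum, samples of M(k, l)
mu_back = np.fft.ifft2(M).real  # inverse transform recovers the image
```

The zero-frequency coefficient M[0, 0] equals the sum of all pixel values, mirroring the k = l = 0 case of the integral above.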

Iterative Image Reconstruction

Iterative reconstruction refers to the iterative algorithms used to reconstruct 2D and 3D images. In computed tomography, an image must be reconstructed from the projections of an object. Unlike the common filtered back projection, an iterative method compares the projections of the current image estimate with the original projection data and updates the image based on the difference from the actual projections. The advantages of the iterative approach are improved insensitivity to noise and greater flexibility in the reconstruction.

Iterative algorithms require two kinds of steps: forward projection (image domain to projection domain) and backprojection (projection domain to image domain).

Statistics model of Image Reconstruction

The requirement that image reconstruction be completed in one step prevents full use of the statistical information. Iterative methods are more flexible. Consistency is obtained by finding the image model for which the residuals form a statistically acceptable random sample of the parent statistical distribution of the noise. The data model is then our estimate of the reproducible signal in the measurements, and the residuals are our estimate of the irreproducible statistical noise.

Data fitting has three components. First, there must be a fitting procedure to find the image model; this is done by minimizing a merit function, often subject to additional constraints. Second, there must be tests of goodness of fit (preferably multiple tests) that determine whether the residuals obtained are consistent with the parent statistical distribution. Third, one would like to estimate the remaining errors in the image model. To clarify these components of data fitting, consider the familiar example of a linear regression: one determines the regression coefficients by finding the values that minimize a merit function consisting of the sum of the squares of the residuals.

Maximum Likelihood

An image model I results in a data model M. The parent statistical distribution of the noise in turn determines the probability of the data given the data model, p(D|M). The most common parent statistical distributions are the Gaussian distribution and the Poisson distribution. When the noise in different pixels is statistically independent, the joint probability of all the pixels is the product of the probabilities of the individual pixels. The Gaussian probability is

\begin{displaymath}p(D|I) = \prod_i \frac{1}{\sqrt{2\pi\sigma_i^2}}\, e^{-(D_i - M_i)^2 / 2\sigma_i^2} .\end{displaymath}

If there are correlations between pixels, p(D|I) is a more complicated function.

The goal of data fitting is to find the best estimate Î of I such that p(D|Î) is consistent with the parent statistical distribution. The maximum-likelihood method selects the image model by maximizing the likelihood function or, equivalently, minimizing the negative log-likelihood function. This method is known in statistics to provide the best estimates for a broad range of parametric fits in the limit in which the number of estimated parameters is much smaller than the number of data points. Most image reconstruction, however, is nonparametric; i.e., the "parameters" are image values on a grid, and their number is comparable to the number of data points. In that regime maximum likelihood is not a good way to estimate the image and can lead to significant artifacts and biases.
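For the independent-Gaussian case, the negative log-likelihood reduces (up to a constant) to half the familiar chi-square sum. A minimal sketch, with invented data:

```python
import numpy as np

def neg_log_likelihood(data, model, sigma):
    """Negative Gaussian log-likelihood for independent pixels,
    dropping the constant normalization term: 0.5 * chi^2."""
    return 0.5 * np.sum((data - model) ** 2 / sigma ** 2)

rng = np.random.default_rng(0)
truth = np.ones(1000)                     # a flat "true" image
sigma = 0.1
data = truth + sigma * rng.normal(size=truth.size)

# For the true model, chi^2 per data point should be ~1, so NLL ~ N/2.
nll = neg_log_likelihood(data, truth, sigma)
```

A model that drives the residuals far below sigma would yield an implausibly small value, which is exactly the overfitting danger described above.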

Algebraic Reconstruction Technique (ART)

Algebraic reconstruction techniques (ART) are iterative methods for recovering objects from their projections. It is claimed that, by careful adjustment of the order in which the collected data are accessed during the reconstruction procedure and of the so-called relaxation parameters that must be chosen in an algebraic reconstruction technique, ART can produce high-quality reconstructions with excellent computational efficiency.
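The classic row-action (Kaczmarz) form of ART can be sketched as follows; the weight matrix, data, and iteration counts are my own toy choices:

```python
import numpy as np

def art(A, b, n_sweeps=50, lam=1.0):
    """Kaczmarz-style ART: for each ray (row of A), project the current
    estimate onto the hyperplane A[i] @ x = b[i], scaled by relaxation lam."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += lam * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Tiny consistent system standing in for projection data.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x = art(A, b)
```

For consistent data the cyclic projections converge to the solution; the relaxation parameter lam and the row ordering control the convergence behavior, as the text notes.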

Multiplicative Algebraic Reconstruction Technique (MART)

The MART technique involves a multiplicative correction of the voxel intensity based on the ratio of the recorded pixel intensity P_i to the projection of the voxel intensities from the current estimate. Writing w_{ij} for the weight with which voxel j contributes to projection i, the update takes the standard MART form

\begin{displaymath}I_j^{(k+1)} = I_j^{(k)} \left( \frac{P_i}{\sum_l w_{il}\, I_l^{(k)}} \right)^{\mu\, w_{ij}},\end{displaymath}

where µ is a relaxation parameter, typically chosen to be no larger than 2. Each voxel's intensity is corrected to satisfy one projection or pixel at a time, with a single iteration being completed only after every projection has been considered. This method has been proven to converge to the maximum-entropy solution, which represents the most probable reconstruction based on the recorded projections.

Simultaneous Algebraic Reconstruction Technique (SART)

SART applies simultaneously the error-correction terms computed by ART for all rays in a given projection. Image reconstruction can be done using the filtered back projection method or an iterative reconstruction technique [8]. In iterative reconstruction, the images to be reconstructed are usually represented as a system of linear equations; by solving this system with an iterative technique such as SART, the image can be reconstructed. SART is an enhanced version of ART that converges more quickly. The mathematical foundation of ART was first proposed by Kaczmarz. The iterative SART update of the unknown image can be written

\begin{displaymath}y_j^{(k+1)} = y_j^{(k)} + \frac{\lambda}{T_{+,j}} \sum_{i=1}^{M} \frac{T_{ij}}{T_{i,+}} \left( b_i - \sum_{l=1}^{N} T_{il}\, y_l^{(k)} \right),\end{displaymath}

where y holds the voxel values, λ is the relaxation factor, the bracketed term is the correction step v, b is the projection data, and T is the weight matrix. The normalization sums T_{i,+} and T_{+,j} are

\begin{displaymath}T_{i,+} = \sum_{j=1}^{N} T_{ij}, \quad i = 1, \dots, M, \qquad T_{+,j} = \sum_{i=1}^{M} T_{ij}, \quad j = 1, \dots, N.\end{displaymath}
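The simultaneous update can be sketched directly in NumPy; the weight matrix T, projection data b, and iteration count here are invented for illustration:

```python
import numpy as np

def sart(T, b, n_iter=500, lam=1.0):
    """SART in the notation above: T is the weight matrix, b the projection
    data; row sums T_{i,+} and column sums T_{+,j} normalize the update."""
    row = T.sum(axis=1)              # T_{i,+}, i = 1..M
    col = T.sum(axis=0)              # T_{+,j}, j = 1..N
    y = np.zeros(T.shape[1])
    for _ in range(n_iter):
        resid = (b - T @ y) / row        # per-ray error, ray-normalized
        y += lam * (T.T @ resid) / col   # simultaneous correction
    return y

T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
b = T @ x_true
y = sart(T, b)
```

Unlike row-action ART, every ray contributes to the correction before the image is changed, which is what makes the update "simultaneous".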

Simultaneous Iterative Reconstruction Technique (SIRT)

SIRT targets a least-squares solution to the line-of-sight integral equation, enabling simultaneous consideration of every projection in each iteration. The aim of such an algorithm is to remove the sensitivity of the reconstruction to error in any single projection by requiring the image I_j to satisfy all projections simultaneously. The SIRT iteration is

\begin{displaymath}I_j^{(k+1)} = I_j^{(k)} + \frac{\lambda^{(k)}}{T_{+,j}} \sum_{i=1}^{M} \frac{T_{ij}}{T_{i,+}} \left( b_i - \sum_{l=1}^{N} T_{il}\, I_l^{(k)} \right).\end{displaymath}

In this case the relaxation parameter λ^{(k)} is decreased with each iteration, which reduces the correction in subsequent iterations as the solution is approached.

Adaptive Algebraic Reconstruction Technique (AART)

This technique is an extension of the additive ART algorithm that adaptively adjusts the relaxation parameters during each stage of the reconstruction. In the basic additive ART algorithm, each voxel's intensity is updated for each projection in each iteration as follows:

\begin{displaymath}I_j^{(k+1)} = I_j^{(k)} + \mu_{ij}^{(k)} \frac{w_{ij}}{\sum_{l} w_{il}^2} \left( P_i - \sum_{l} w_{il}\, I_l^{(k)} \right),\end{displaymath}

where \mu_{ij}^{(k)} represents the relaxation parameter for voxel j, projection i, and iteration k. In standard ART this relaxation parameter is a constant, or at least the same for each voxel in a given iteration. In AART the relaxation parameter is instead adjusted for each voxel so that, as each projection is considered, the voxels that have a larger intensity contribution to the ith projection receive the largest correction. This is done by setting the relaxation parameter in proportion to the ratio of the intensity contribution of each voxel to the integral of the intensity contributions along the projection's line of sight.

NONITERATIVE IMAGE RECONSTRUCTION

A noniterative method for solving the inverse problem is one that derives a solution through an explicit numerical manipulation applied directly to the measured data in one step. The advantages of the noniterative methods are primarily ease of implementation and fast computation.

Fourier Deconvolution

Fourier deconvolution is one of the oldest and numerically fastest methods of image deconvolution. A discrete variant of the Fourier deconvolution can be computed efficiently using fast Fourier transforms. The technique is used in Fourier-transform spectroscopy, in speckle image reconstruction, and in the determination of galaxy redshifts, velocity dispersions, and line profiles.

Small-Kernel Deconvolution

Fast Fourier transforms perform convolutions very efficiently when used on standard desktop computers, but they require the full data frame to be collected before the computation can begin. This is a great disadvantage when processing raster video in pipeline fashion as it comes in, because the time to collect an entire data frame often exceeds the computation time. Pipeline convolution of raster data streams is more efficiently performed by massively parallel summation techniques, even when the kernel covers as much as a few percent of the area of the frame.

In hardware terms, a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) can be much more efficient than a digital signal processor (DSP) or a microprocessor unit (MPU). Commercially available FPGAs or ASICs can be built to perform small-kernel convolutions faster than the rate at which raster video can straightforwardly feed them, which is currently up to ~150 megapixels per second.

Pipeline techniques can be used in image reconstruction by writing deconvolutions as convolutions with the inverse H^{-1} of the point-response function, which is equivalent to the Fourier deconvolution. But H^{-1} extends over the entire array, even if H is a small kernel.

Wiener Filter

Straight deconvolution of the data results in strong amplification of high-k noise. The problem is that the signal decreases rapidly at high k, while the noise is usually flat and does not decay with k. In linear filtering, the Fourier transform of the data, D̃(k), is multiplied by a k-dependent filter Φ(k), and the product is transformed back to provide the filtered data. Linear filtering is a particularly useful tool in deconvolution, because the filtering can be combined with the Fourier deconvolution to yield the filtered deconvolution.

The optimal filter, which minimizes the difference between the filtered noisy data and the true signal, is the Wiener filter; expressed in Fourier space in terms of the signal and noise power spectra, it is

\begin{displaymath}\Phi(k) = \frac{|\tilde{S}(k)|^2}{|\tilde{S}(k)|^2 + |\tilde{N}(k)|^2} .\end{displaymath}

A disadvantage of the Wiener filter is that it is completely deterministic and does not leave the user with a tuning parameter. In variants that do introduce such a parameter, higher values result in more aggressive filtering, whereas lower values yield a smaller degree of filtering.
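The idealized Wiener setting, where the signal and noise power spectra are known, can be sketched on a toy 1D problem of my own construction:

```python
import numpy as np

# Wiener filtering of a noisy 1D signal, assuming known power spectra.
rng = np.random.default_rng(1)
n = 256
x = np.linspace(0, 1, n)
signal = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)   # smooth "true" signal
data = signal + 0.1 * rng.normal(size=n)          # flat (white) noise added

S2 = np.abs(np.fft.fft(signal)) ** 2   # signal power spectrum |S(k)|^2
N2 = np.full(n, n * 0.1 ** 2)          # flat noise power spectrum |N(k)|^2
phi = S2 / (S2 + N2)                   # Wiener filter Phi(k)

filtered = np.fft.ifft(phi * np.fft.fft(data)).real

err_raw = np.mean((data - signal) ** 2)
err_filt = np.mean((filtered - signal) ** 2)
```

Because the smooth signal lives at low k while the noise is flat, the filter passes low frequencies nearly untouched and suppresses the high-k noise, reducing the mean squared error.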

Wavelets

The Fourier transform is a very convenient way to perform deconvolution, because convolutions are simple products in Fourier space. The disadvantage of the Fourier spectral functions is that they span the whole image and cannot be localized: there are no functions that are perfectly narrow in both image space and Fourier space. The goal is to find a useful compromise.

Wavelet filtering is similar to Fourier filtering and involves the following steps: wavelet-transform the data to the spectral domain, attenuate or truncate the wavelet coefficients, and transform back to data space. The wavelet filtering can be as simple as truncating all coefficients smaller than mσ, where σ is the standard deviation of the noise and m is a chosen multiplier. Once the data have been filtered, deconvolution can proceed by the Fourier method or by small-kernel deconvolution. Of course, the deconvolution cannot be performed in wavelet space, because wavelets, including the à trous wavelets, are not eigenfunctions of the point-response function. Wavelet filtering can also be combined with iterative image reconstruction.
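A minimal one-scale sketch of this truncation, using a single-level Haar transform (the signal and noise below are invented; real pipelines use multi-scale transforms):

```python
import numpy as np

def haar_threshold(data, m, sigma):
    """Single-level Haar wavelet filtering: truncate detail coefficients
    smaller than m*sigma, then invert the (orthonormal) transform."""
    a = (data[0::2] + data[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (data[0::2] - data[1::2]) / np.sqrt(2)   # detail coefficients
    d[np.abs(d) < m * sigma] = 0.0               # hard threshold at m*sigma
    out = np.empty_like(data)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
sigma = 0.05
clean = np.repeat([0.0, 1.0, 0.0], 64)           # piecewise-constant signal
noisy = clean + sigma * rng.normal(size=clean.size)
denoised = haar_threshold(noisy, m=3.0, sigma=sigma)
```

For a piecewise-constant signal the true detail coefficients are (nearly) zero, so zeroing the small, noise-dominated coefficients reduces the error without blurring the edges, which is the localization advantage over Fourier truncation.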

Quick Pixon

The Pixon method is another way to obtain spatially adaptive noise suppression; a comprehensive discussion of the method and its motivation is deferred. A faster variant is the quick Pixon method, which applies the same adaptive Pixon smoothing to the data instead of to image models. This smoothing can be performed once on the input data, after which the data can be deconvolved using the Fourier method or small-kernel deconvolution.

The advantage of the quick Pixon method is its speed. Because the method is noniterative and consists primarily of convolutions and deconvolutions, the computation can be performed in pipeline fashion using small-kernel convolutions. This allows one to build special-purpose hardware to process raster video in real time at the maximum available video rates.

PARAMETRIC IMAGE RECONSTRUCTION

Simple Parametric Modeling

Parametric fits are always superior to other methods, provided that the image can be correctly modeled with known functions that depend upon a few adjustable parameters. The simplest parametric method is a least-squares fit minimizing χ², the sum of the squared residuals weighted by their inverse variances:

\begin{displaymath}\chi^2 = \sum_i \frac{(D_i - M_i)^2}{\sigma_i^2} .\end{displaymath}
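A weighted least-squares straight-line fit minimizing this χ² can be sketched as follows; the data, uncertainties, and true coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 50)
sigma = np.full(x.size, 0.2)                   # per-point uncertainties
data = 1.5 * x + 4.0 + sigma * rng.normal(size=x.size)

# Design matrix for the model M_i = a*x_i + b; weighting each row by
# 1/sigma_i makes lstsq minimize exactly the chi^2 sum above.
A = np.column_stack([x, np.ones_like(x)]) / sigma[:, None]
coef, *_ = np.linalg.lstsq(A, data / sigma, rcond=None)
a, b_fit = coef

model = a * x + b_fit
chi2 = np.sum((data - model) ** 2 / sigma ** 2)
```

A good fit should yield χ² comparable to the number of degrees of freedom (here 48), which is the goodness-of-fit check discussed below.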

Error Estimation

Fitting χ² has two additional advantages: the minimum χ² is a measure of goodness of fit, and the variation of χ² around its minimum value can be used to estimate the errors of the parameters. There is a distinction between "interesting" and "uninteresting" parameters and the roles they play in image error estimation. A convenient way to estimate the errors of a fit with p parameters is to draw a confidence limit in the p-dimensional parameter space: a hypersurface surrounding the fitted values on which χ² has a constant value. If Δχ² = χ² − χ²_min is the difference between the value of χ² on the hypersurface and the minimum value found by fitting the data, then the tail probability α that the parameters would be found outside this hypersurface by chance is approximately given by a χ² distribution with p degrees of freedom.

Parametric fits often contain a combination of q "interesting" parameters and r = p − q uninteresting parameters. To obtain a confidence limit for only the interesting parameters, without any limits on the uninteresting parameters, one determines the q-dimensional hypersurface on which Δχ² is constant. The only proviso is that in computing χ² for any set of interesting parameters, χ² is optimized with respect to all the uninteresting parameters. A special case is that of a single interesting parameter (q = 1). The points at which Δχ² = m² are then the m-σ error limits of the parameter; in particular, the 1σ limit is found where Δχ² = 1.

Clean

Parameter errors are also important in models in which the total number of parameters is not fixed. Clean, an iterative method originally developed for radio-synthesis imaging, is an example of parametric image reconstruction with a built-in cut-off mechanism. Multiple point sources are fitted to the data one at a time, starting with the brightest sources and progressing to weaker sources, a process described as cleaning the image. The Clean algorithm consists of four steps. First, start with a zero image. Second, add to the image a new point source at the location of the largest residual. Third, fit the data for the positions and fluxes of all the point sources introduced into the image so far. Fourth, return to the second step if the residuals are not statistically consistent with random noise. Clean has enabled synthesis imaging of complicated fields of radio sources even with limited coverage of the Fourier (u,v) plane.
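A toy 1D version of this loop can be sketched as follows. The PSF, sources, noise level, and loop gain are my own choices, and for simplicity the joint refit of all source fluxes (step three) is replaced by recomputing the residuals after each subtraction:

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 128, 0.01
truth = np.zeros(n)
truth[[30, 90]] = [1.0, 0.5]                   # two point sources

# Gaussian point-response function, centered, with unit peak.
psf = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 2.0 ** 2)

def convolve(img):
    """Circular convolution with the centered PSF."""
    return np.fft.ifft(np.fft.fft(img) * np.fft.fft(np.fft.ifftshift(psf))).real

data = convolve(truth) + sigma * rng.normal(size=n)

clean_img = np.zeros(n)                        # step 1: zero image
residual = data.copy()
gain = 0.2                                     # loop gain
for _ in range(500):
    if residual.max() < 5 * sigma:             # step 4: residuals ~ noise
        break
    i = int(np.argmax(residual))               # step 2: largest residual
    clean_img[i] += gain * residual[i]         # add a point-source component
    residual = data - convolve(clean_img)      # recompute the residuals
```

The stopping test is the built-in cut-off mechanism: cleaning halts once the residual map is statistically consistent with the noise, so no spurious faint sources are fitted.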

NONPARAMETRIC IMAGE RECONSTRUCTION

Despite the great power of parametric methods for performing high-quality and robust image reconstructions, their use is severely restricted by the requirement that explicit functions be identified with which to model the image. A general feature of nonparametric models is that the number of model values to be determined can be comparable to, or exceed, the number of data points. A nonparametric method defines an image model on a grid of pixels equal in size to that of the data, and must then by some means determine image values for all pixels in the image grid. In the worst case, each image value may be individually and independently adjustable. Iterative nonparametric methods that enforce no restrictions on image models are often no better at controlling noise than the noniterative methods. The iterative methods usually use the log-likelihood function as their merit function, despite its inadequacy for nonparametric fits, but they restrict its minimization in different ways.

Early Termination of the Fit

A nonparametric maximum-likelihood fit can result in zero residuals. If the image and the data are defined on the same grid, a nonnegative point-response function is a nonsingular, square matrix, which has an inverse. The maximum-likelihood solution is therefore the one for which the residuals are identically zero, as in Fourier deconvolution. A set of zero residuals is hardly a statistically acceptable sample of the parent statistical distribution. One way to avoid letting the residuals become too small in an iterative fit is to terminate the fit before this happens.

Nonnegative Least-Squares

A simple constraint that greatly increases the performance of a maximum-likelihood method is to disallow negative image values. When this constraint is applied to a least-squares fit, the procedure is known as a nonnegative least-squares fit. Nonnegativity is certainly a necessary restriction for almost all images. A qualitative argument that supports this idea is that if the image contains both large positive and large negative fluctuations on length scales smaller than the width of the point-response function, then these fluctuations mutually cancel upon convolution with the point-response function.

The requirement that the image be nonnegative also increases resolution. The degree of possible subpixel resolution depends on the signal-to-noise ratio and the width of the point-response function. Half-pixel resolution, and even quarter-pixel resolution, can often be obtained. Procedures that impose nonnegativity include changes of variable and simply setting negative values to zero after each iteration.
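The second of those procedures (truncating negatives after each iteration) can be sketched as projected-gradient nonnegative least squares; the matrix and data are invented for illustration:

```python
import numpy as np

def nn_least_squares(A, b, n_iter=2000):
    """Projected-gradient nonnegative least squares: take a gradient step
    on ||A x - b||^2, then set negative values to zero after each step."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step, 1/||A||_2^2
    for _ in range(n_iter):
        x += step * A.T @ (b - A @ x)        # gradient step on the residuals
        np.maximum(x, 0.0, out=x)            # enforce nonnegativity
    return x

A = np.array([[2.0, 1.0],
              [1.0, 3.0],
              [1.0, 1.0]])
x_true = np.array([0.0, 2.0])                # solution sits on the boundary
b = A @ x_true
x = nn_least_squares(A, b)
```

The projection step is exactly the "set negative values to zero after each iteration" rule described in the text.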

Van Cittert

The van Cittert method is one of the earliest and simplest iterative methods for image reconstruction problems in which the data and image are defined on the same grid. The iteration begins with the zeroth-order image I^{(0)} = αD at all grid points and iterates from there according to

\begin{displaymath}I^{(k+1)} = \alpha D + Q\, I^{(k)},\end{displaymath}

where Q = 1 − αH, and 1 is the identity kernel. The van Cittert method therefore exhibits noise amplification, just as the Fourier-based methods do, and the iteration must be terminated prior to convergence. The art of applying the van Cittert method lies in choosing a value of the parameter α and establishing a stopping criterion so that the computation time, noise amplification, and degree of recovered resolution are acceptable. Although the convergence of the van Cittert iteration can be slow, solutions can be obtained especially quickly when the point-spread function is centrally peaked and relatively narrow.
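A minimal sketch of the iteration on a centrally peaked, narrow 1D blur; the matrix H, signal, and iteration count are my own toy choices, and the data are noiseless so the iteration is run to convergence:

```python
import numpy as np

# Van Cittert iteration I(k+1) = alpha*D + (1 - alpha*H) I(k).
n = 32
H = np.zeros((n, n))
for i in range(n):                       # narrow, centrally peaked kernel
    H[i, i] = 0.6
    H[i, (i - 1) % n] = 0.2
    H[i, (i + 1) % n] = 0.2

truth = np.zeros(n)
truth[10:15] = 1.0
D = H @ truth                            # noiseless blurred data

alpha = 1.0
I = alpha * D                            # zeroth-order image I(0)
for _ in range(200):
    I = alpha * D + I - alpha * (H @ I)  # I(k+1) = alpha*D + Q I(k)
```

With noisy data the same loop would amplify noise, so in practice it is stopped early, as the text explains.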

Landweber

The Landweber method is another iterative scheme; the procedure is often modified to avoid negative image values, which yields the projective Landweber method. The iteration is

\begin{displaymath}I^{(k+1)} = I^{(k)} + \alpha\, H^{T} \left( D - H I^{(k)} \right),\end{displaymath}

where the superscript T denotes the transpose operation and α is a small positive parameter. This method is designed to minimize the sum of the squares of the residuals by ensuring that each change in the image, I^{(k+1)} − I^{(k)}, is in the direction of the negative of the gradient of χ² with respect to I. Practitioners using the method have found that it often initially produces a good solution but thereafter begins to diverge.
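The iteration can be sketched on the same kind of toy 1D blur matrix (my own construction, noiseless data, so the loop is simply run to convergence):

```python
import numpy as np

# Landweber iteration I(k+1) = I(k) + alpha * H^T (D - H I(k)).
n = 32
H = np.zeros((n, n))
for i in range(n):
    H[i, i] = 0.6
    H[i, (i - 1) % n] = 0.2
    H[i, (i + 1) % n] = 0.2

truth = np.zeros(n)
truth[8:12] = 1.0
D = H @ truth                             # noiseless blurred data

alpha = 0.5                               # small positive parameter
I = np.zeros(n)
for _ in range(2000):
    I = I + alpha * H.T @ (D - H @ I)     # negative-gradient step
```

With noisy data the high-frequency components diverge first, which is why the iteration is usually stopped early or projected onto nonnegative images.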

Richardson-Lucy

The Richardson-Lucy method was developed specifically for data comprising discrete, countable events that follow a Poisson distribution. The nonlinear negative log-likelihood function is minimized iteratively using multiplicative corrections:

\begin{displaymath}I_j^{(k+1)} = I_j^{(k)} \sum_i H_{ij}\, \frac{D_i}{\left( H I^{(k)} \right)_i},\end{displaymath}

for a point-response function normalized so that \sum_i H_{ij} = 1.

The Richardson-Lucy algorithm is flux conserving, maintains image nonnegativity, and decreases the negative log-likelihood function in each iteration, at least if one takes only part of the step indicated.
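A minimal 1D sketch of the iteration, using a toy periodic blur matrix and Poisson counts of my own construction:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 32
H = np.zeros((n, n))
for i in range(n):                       # blur matrix; columns sum to 1
    H[i, i] = 0.6
    H[i, (i - 1) % n] = 0.2
    H[i, (i + 1) % n] = 0.2

truth = np.full(n, 5.0)
truth[12:18] = 100.0                     # a bright source on background
D = rng.poisson(H @ truth).astype(float) # Poisson-distributed counts

I = np.full(n, D.mean())                 # flat, positive starting image
for _ in range(100):
    I = I * (H.T @ (D / (H @ I)))        # multiplicative RL update
```

Because the update is multiplicative and the columns of H sum to one, the image stays nonnegative and its total flux equals the total counts after every iteration, illustrating the conservation properties noted above.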

Conjugate-Gradient

The conjugate-gradient method starts from some initial image I^{(0)}, computes the negative gradient ("negradient") g^{(0)} of the log-likelihood function with respect to the image, and sets the initial conjugate-gradient direction h^{(0)} = g^{(0)}. It then generates a sequence of negradients g^{(k)} and conjugate-gradient directions h^{(k)}: it finds the minimum of the log-likelihood function along the conjugate-gradient direction h^{(k)}, and at the position of that minimum it computes the next negradient g^{(k+1)}. The new conjugate-gradient direction is then set to a linear combination of the old conjugate-gradient direction and the new negradient,

\begin{displaymath}h^{(k+1)} = g^{(k+1)} + \gamma_k\, h^{(k)} .\end{displaymath}

The coefficient γ_k is chosen to optimize convergence; in the Fletcher-Reeves form it is

\begin{displaymath}\gamma_k = \frac{\sum g^{(k+1)} g^{(k+1)}}{\sum g^{(k)} g^{(k)}},\end{displaymath}

where the sums are over all the image points. The stopping criterion for the conjugate-gradient minimization is similar to that of the slower methods. The most effective way to impose nonnegative solutions is to modify the conjugate-gradient method as follows. At each iteration, first compute the negradient. Next, check the negradient components of all the pixels whose image values are zero, and set those components to zero if they are negative; then compute the conjugate-gradient direction in the usual way. Find the minimum along the conjugate-gradient direction without regard to the sign of the image, and truncate all negative image values to zero, thereby jumping to a new solution. Finally, go back to the first step and continue with the next conjugate-gradient iteration as though no truncation took place. For a quadratic merit function the minimum along the conjugate-gradient direction can be found analytically in one step; for nonlinear log-likelihood functions it is necessary to search iteratively for the minimum, which requires that the log-likelihood function be computed several times along the conjugate-gradient direction.
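For the quadratic (least-squares) merit function, the full negradient/conjugate-direction recipe with the Fletcher-Reeves coefficient can be sketched as follows; the matrix, data, and iteration count are toy choices of mine:

```python
import numpy as np

# Conjugate-gradient minimization of ||H x - D||^2 via the
# normal equations (A = H^T H, b = H^T D).
n = 32
rng = np.random.default_rng(7)
H = np.eye(n) + 0.02 * rng.normal(size=(n, n))   # well-conditioned system
truth = rng.normal(size=n)
D = H @ truth

A = H.T @ H
b = H.T @ D

x = np.zeros(n)                               # initial image I(0)
g = b - A @ x                                 # initial negradient g(0)
h = g.copy()                                  # initial direction h(0)
for _ in range(3 * n):
    if g @ g < 1e-24:                         # stopping criterion
        break
    step = (g @ g) / (h @ A @ h)              # exact line minimum (quadratic)
    x += step * h
    g_new = g - step * (A @ h)                # next negradient
    gamma = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves coefficient
    h = g_new + gamma * h
    g = g_new
```

For a quadratic merit function the line minimum is available in closed form, as used here; a nonlinear log-likelihood would require an inner line search at that step instead.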


