Fast Parallel Thinning Algorithm


02 Nov 2017


1. INTRODUCTION
It is well known that the general problem of pattern recognition lies in the effectiveness and efficiency of extracting the distinctive features from the patterns. One successful approach to recognising certain types of digital patterns is the stroke analysis method, which specialises in recognising alphanumeric characters and ideographs. It should be noted that every thinning algorithm, whether implemented in hardware or software, introduces its own kind of distortion into the thinned strokes. (T. Y. Zhang and C. Y. Suen)

A binary digitized picture is defined by a matrix IT in which each pixel IT(i, j) is either 1 or 0. The pattern consists of the pixels with value 1, while the pixels with value 0 form the background. Each stroke in the pattern is more than one pixel thick. Iterative transformations are applied to the matrix IT point by point according to the values of a small set of neighboring points. The neighbors of the point (i, j) are taken to be (i-1, j), (i-1, j+1), (i, j+1), (i+1, j+1), (i+1, j), (i+1, j-1), (i, j-1), and (i-1, j-1), as shown in Table … .

In parallel picture processing, the new value given to a point at the nth iteration will depend on its own value as well as those of its eight neighbors at the (n-1)th iteration, so that all picture elements can be processed simultaneously. It is assumed that a 3 x 3 pixel window is used, and that each pixel is connected with its eight neighboring pixels. The algorithm explained in this section requires only simple computations.

P9 (i-1, j-1)   P2 (i-1, j)   P3 (i-1, j+1)
P8 (i, j-1)     P1 (i, j)     P4 (i, j+1)
P7 (i+1, j-1)   P6 (i+1, j)   P5 (i+1, j+1)

Table … Positions of the 8 neighbors around the pixel of interest P1 in a 3x3 window.
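In code, the neighbourhood bookkeeping above can be sketched as a small helper (a sketch only; the function name and the list-of-lists image representation are our own choices, not from the paper):

```python
def neighbours(img, i, j):
    """Return the 8 neighbours P2..P9 of the pixel P1 = (i, j),
    in the clockwise order of the table above.
    img is a binary picture stored as a list of lists."""
    return [img[i - 1][j],      # P2
            img[i - 1][j + 1],  # P3
            img[i][j + 1],      # P4
            img[i + 1][j + 1],  # P5
            img[i + 1][j],      # P6
            img[i + 1][j - 1],  # P7
            img[i][j - 1],      # P8
            img[i - 1][j - 1]]  # P9
```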

3. THINNING ALGORITHM


The method for extracting the skeleton of a picture consists of removing all the contour points of the picture except those points that belong to the skeleton. To preserve the connectivity of the skeleton, each iteration is divided into two sub-iterations.

In the first sub-iteration, the contour pixel P1 is deleted from the pattern if the following conditions are satisfied:

(a) 2 ≤ B(P1) ≤ 6

(b) A(P1) = 1

(c) P2*P4*P6 = 0

(d) P4*P6*P8 = 0

where A(P1) is the number of 01 patterns (0→1 transitions) in the ordered set P2, P3, P4, P5, P6, P7, P8, P9, the eight neighbors of P1 (Table …), and

B(P1) represents the number of nonzero neighbors of P1, that is,

B(P1)= P2+ P3+ P4+P5+P6+P7+P8+ P9.
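These two quantities are straightforward to compute from the neighbour list (a sketch; the function names are ours):

```python
def A(nbrs):
    """A(P1): number of 01 patterns (0->1 transitions) in the
    circular sequence P2, P3, ..., P9, P2."""
    seq = nbrs + nbrs[:1]        # close the circle: append P2 again
    return sum(a == 0 and b == 1 for a, b in zip(seq, seq[1:]))

def B(nbrs):
    """B(P1): number of nonzero neighbours of P1."""
    return sum(nbrs)
```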

If any condition is not satisfied, P1 is not deleted. For example, for the values of P2, P3, P4, P5, P6, P7, P8, P9 shown in Figure 2 we have

A(P1) = 2

so condition (b) fails and P1 is not deleted from the picture.

In the second sub-iteration, only conditions (c) and (d) will change (Figure 3) to the following:

(c') P2*P4*P8 = 0

(d') P2*P6*P8 = 0

and the rest will remain the same.
By conditions (c) and (d) of the first sub-iteration, it can be shown that the first sub-iteration removes only the south-east boundary points and the north-west corner points, which do not belong to an ideal skeleton. The proof for the first sub-iteration is as follows: the points to be deleted satisfy the conditions

(c) P2*P4*P6 = 0

(d) P4*P6*P8 = 0

The solutions to this set of equations are P4 = 0, or P6 = 0, or (P2 = 0 and P8 = 0). In each case the deleted point P1 is a south-east boundary point or a north-west corner point. Similarly, one can show that a point P1 deleted in the second sub-iteration is a north-west boundary point or a south-east corner point. As an example, the results of processing the character "H" by the two sub-iterations are shown in Figures …(a) and …(b); the points marked by "." are those that have been removed. The final result is shown in Figure …(c).

As a result, condition (a) preserves the endpoints of a skeleton line (centerline), while condition (b) prevents the deletion of points that lie between the endpoints of a skeleton line, as shown in Figure …. The iterations continue until no more points can be removed.

A flowchart of the proposed thinning algorithm is shown in Figure …. Initially, the original picture is stored in a matrix called IT and a counter C is set to 0. The result of the processed picture is stored back in the matrix IT. To save memory space, only two matrices, IT and M, are used in our computation.
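Putting the two sub-iterations together, the whole procedure can be sketched as follows. This is our sketch, not the paper's implementation: it keeps one working copy of the picture instead of the IT/M matrix pair, and assumes all border pixels are 0 so the 3x3 window always fits:

```python
def zhang_suen_thin(img):
    """Two-sub-iteration thinning sketch. img is a binary picture
    (list of lists of 0/1) whose border pixels are all 0.
    Returns a thinned copy."""
    img = [row[:] for row in img]

    def nbrs(i, j):
        # P2..P9 in the clockwise order of the neighbourhood table.
        return (img[i-1][j], img[i-1][j+1], img[i][j+1], img[i+1][j+1],
                img[i+1][j], img[i+1][j-1], img[i][j-1], img[i-1][j-1])

    changed = True
    while changed:
        changed = False
        for step in (0, 1):              # the two sub-iterations
            to_delete = []
            for i in range(1, len(img) - 1):
                for j in range(1, len(img[0]) - 1):
                    if img[i][j] != 1:
                        continue
                    P2, P3, P4, P5, P6, P7, P8, P9 = nbrs(i, j)
                    B = P2 + P3 + P4 + P5 + P6 + P7 + P8 + P9
                    seq = [P2, P3, P4, P5, P6, P7, P8, P9, P2]
                    A = sum(a == 0 and b == 1 for a, b in zip(seq, seq[1:]))
                    if step == 0:        # conditions (c) and (d)
                        cd = P2 * P4 * P6 == 0 and P4 * P6 * P8 == 0
                    else:                # conditions (c') and (d')
                        cd = P2 * P4 * P8 == 0 and P2 * P6 * P8 == 0
                    if 2 <= B <= 6 and A == 1 and cd:
                        to_delete.append((i, j))
            for i, j in to_delete:       # delete "in parallel" after the scan
                img[i][j] = 0
            changed = changed or bool(to_delete)
    return img
```

Collecting the deletions first and applying them after the scan is what makes each sub-iteration a parallel operation on the picture.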

Figures 7(a)-7(c) show the results obtained by our algorithm for a Chinese character "@," a letter "B," and a digitized "moving body," respectively. Skeleton points are marked by "*" or "@," and all points deleted during the thinning process are marked by "-."

The ‘Fast Marching Method’

Introduction.

The ‘Fast Marching Method’ (FMM) is a simple technique for computing the time needed for a front to reach each point of a discrete lattice. If the front is a closed curve in a 2D image and the lattice is a pixel grid, the algorithm assigns to each pixel a value corresponding to the time needed for the curve to reach that pixel (Figure 1). In the case of 3D images, the lattice is a voxel grid. (Baertzen pdf1)

The method.

The FMM can be described as the evolution of a front. Assuming the front evolves by motion in the normal direction, its speed may vary but can never be negative. At any point, the front's motion is described by the Eikonal equation [pdf1]

(EQ. 1)    |∇T(x)| F(x) = 1

where T(x) is the front's arrival time at the point x and F(x) ≥ 0 is the speed of the front at the point x.

Because the method is quite generic, we simplify matters by assuming F = 1 everywhere and by focusing on isotropic 3D rectangular voxel lattices. The result is then the shortest distance from the boundary to every other point of the lattice.

The algorithm.

The method works from the inside out, starting from an initial condition. To compute the distance field from a single point, a single voxel is used as the initial condition; in general, however, the initial condition can consist of multiple voxels.

Initially, all voxels that form the initial condition are marked "Stationary", and the distances of all their neighbours are computed. Voxels that are not stationary but whose distances have already been computed are called "Narrow Band" voxels. In each iteration, the narrow band voxel with the lowest computed distance becomes stationary, and the distances of its neighbouring voxels are recomputed.

As a result, stationary voxels may be used for the computation of neighbouring voxels' distances but are never themselves recomputed. The algorithm thus advances a front of narrow band voxels outward from the initial condition.

The most important data structure the algorithm uses is a binary heap. Each element of the heap stores a distance value and a pointer to a single voxel. Recall that in a binary (min-)heap the element with the lowest value is always on top; the heap therefore lets us find the narrow band voxel with the smallest distance efficiently. In detail:

Initialization: First we mark all starting voxels as Stationary. Each stationary voxel is six-connected to its neighbours, and for each such neighbour we compute a distance using the information provided by the stationary voxels (the exact procedure is discussed in the following section). All these neighbouring voxels are marked as narrow band voxels and added to the binary heap. All of this happens before the main loop.

Main loop: First we find the narrow band voxel with the smallest value, which sits at the top of the binary heap, and mark it as stationary, i.e. its computed distance value is now final. For every non-stationary neighbour we compute a distance value, mark the neighbour as a narrow band voxel, and insert it into the binary heap. If a neighbouring voxel is already in the narrow band, we recompute its distance and change its position in the heap to reflect the new value. Finally, we loop back to the start to find the next narrow band voxel with the lowest distance.
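The loop above maps naturally onto a priority queue. The following 2D sketch (ours, with F = 1 and Python's heapq) follows the structure just described, except that instead of moving a recomputed voxel inside the heap it simply pushes a duplicate entry and skips stale ones on the way out:

```python
import heapq

def fast_marching(shape, sources, h=1.0):
    """First-order FMM sketch on a 2D grid with unit speed (F = 1).
    `sources` is a list of (i, j) seeds with arrival time 0.
    Returns the grid of arrival times T."""
    rows, cols = shape
    INF = float("inf")
    T = [[INF] * cols for _ in range(rows)]
    frozen = [[False] * cols for _ in range(rows)]
    heap = []
    for i, j in sources:
        T[i][j] = 0.0
        heapq.heappush(heap, (0.0, i, j))

    def update(i, j):
        # Smallest stationary (frozen) neighbour along each axis.
        tx = min(T[i][j-1] if j > 0 and frozen[i][j-1] else INF,
                 T[i][j+1] if j < cols - 1 and frozen[i][j+1] else INF)
        ty = min(T[i-1][j] if i > 0 and frozen[i-1][j] else INF,
                 T[i+1][j] if i < rows - 1 and frozen[i+1][j] else INF)
        a, b = sorted((tx, ty))
        if b - a >= h:               # only one usable axis: 1D update
            return a + h
        # Larger root of (t - tx)^2 + (t - ty)^2 = h^2.
        return 0.5 * (a + b + (2 * h * h - (b - a) ** 2) ** 0.5)

    while heap:
        t, i, j = heapq.heappop(heap)
        if frozen[i][j]:
            continue                 # stale duplicate entry
        frozen[i][j] = True          # the voxel becomes Stationary
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols and not frozen[ni][nj]:
                t_new = update(ni, nj)
                if t_new < T[ni][nj]:
                    T[ni][nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T
```

The duplicate-entry trick trades a little memory for not having to reposition heap elements, which is exactly the bookkeeping problem discussed under "Implementation Details" below.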

Figure … shows the dataflow chart corresponding to the high-level pseudo-code.

Implementation Details.

One needs to be able to locate, for a given voxel, the heap element storing its distance value. According to (Baertzen and Sethian pdf1, pdf2), each narrow band voxel in the lattice must therefore hold a pointer to its heap element, linking the two together. However, the pointer by itself is not enough, because heap elements change position whenever the heap is reordered. Consequently, these pointers must be updated every single time the binary heap changes. From a software engineering point of view, the binary heap and the lattice data structure can therefore never be fully separated.

Alternatively, one could keep one list that stores the distance values and a second list that holds a permutation of their ordering. The elements of the permutation list point to elements of the distance value list and vice versa {add pdf 9 prposed by pdf 1}. Once a value is inserted into the distance list, its position never changes; only the permutation changes. The lattice's heap pointers can therefore remain stable, and the binary heap does not need access to the elements of the lattice.
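The back-pointer scheme can be sketched as an indexed binary heap, where a position map plays the role of the pointers that must be kept up to date on every swap (a sketch; the class and method names are ours):

```python
class IndexedMinHeap:
    """Binary min-heap of (key, voxel) pairs with a position map,
    mirroring the back-pointer scheme: every swap updates the map,
    so a narrow band voxel's key can be lowered in O(log n)."""

    def __init__(self):
        self.keys = {}   # voxel -> current key (distance value)
        self.heap = []   # voxels, ordered as an implicit binary tree
        self.pos = {}    # voxel -> its index in self.heap

    def _swap(self, a, b):
        h = self.heap
        h[a], h[b] = h[b], h[a]
        self.pos[h[a]], self.pos[h[b]] = a, b

    def _sift_up(self, i):
        while i > 0 and self.keys[self.heap[i]] < self.keys[self.heap[(i - 1) // 2]]:
            self._swap(i, (i - 1) // 2)
            i = (i - 1) // 2

    def _sift_down(self, i):
        n = len(self.heap)
        while True:
            child = min((c for c in (2 * i + 1, 2 * i + 2) if c < n),
                        key=lambda c: self.keys[self.heap[c]], default=None)
            if child is None or self.keys[self.heap[i]] <= self.keys[self.heap[child]]:
                return
            self._swap(i, child)
            i = child

    def push_or_decrease(self, voxel, key):
        """Insert a new voxel, or lower the key of one already in the
        narrow band (FMM recomputation only ever lowers distances)."""
        if voxel not in self.pos:
            self.heap.append(voxel)
            self.pos[voxel] = len(self.heap) - 1
        self.keys[voxel] = key
        self._sift_up(self.pos[voxel])

    def pop_min(self):
        voxel = self.heap[0]
        self._swap(0, len(self.heap) - 1)
        self.heap.pop()
        del self.pos[voxel]
        self._sift_down(0)
        return voxel, self.keys.pop(voxel)
```

The `pos` map is the "pointer" the text describes; because `_swap` always refreshes it, the lattice side only ever needs the voxel's identity, not the heap index itself.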

How Distances are computed.

Using the Eikonal equation we can compute the distance values: the distance values of the narrow band voxels are calculated so that the approximated gradient magnitude equals 1/F.

(EQ. 2)    |∇T(x)| = 1 / F(x)

Recall from the above equation that T represents the arrival time of the front and F its speed. Sethian and Baertzen {add pdf 9 proposed by pdf 1} propose the following approximation of the squared length of the gradient (borrowed from the theory of hyperbolic conservation laws):

(EQ. 3)    |∇T|^2 ≈ max(VA - VB, VA - VC, 0)^2 + max(VA - VD, VA - VE, 0)^2 + max(VA - VF, VA - VG, 0)^2

VA is the distance value that needs to be computed, whereas VB, VC, VD, VE, VF, VG are the values of the voxels in the six-connected neighbourhood, one pair per axis. Figure … shows the six-connected neighbourhood.


We can now insert (3) into (2), which results in the following equation:

(EQ. 4)    max(VA - VB, VA - VC, 0)^2 + max(VA - VD, VA - VE, 0)^2 + max(VA - VF, VA - VG, 0)^2 = 1/F^2

The equation can be solved by examining each term, which has the form max(VA - VB, VA - VC, 0)^2. Clearly, it suffices to use the smaller of the two values VB and VC in each such term.

Moreover, only stationary values are used: if neither VB nor VC is stationary, the term is removed from the equation. It is possible to use non-stationary values in the calculations, but tests have shown that the quality of the solution deteriorates (Baertzen pdf 1). In fact, using non-stationary values can easily lead to situations in which a voxel is used to update another voxel that was itself used to update the first. Nevertheless, Sethian et al. {pdf 13 of pdf 1} do use non-stationary values to obtain higher accuracy in a different implementation of the algorithm, discussed in detail in the following section.

With all the above in mind, we can now form the quadratic equation. If VB, VE, and VF are stationary, with VB < VC, VE < VD, and VF < VG, the equation becomes

(EQ. 5)    (VA - VB)^2 + (VA - VE)^2 + (VA - VF)^2 = 1/F^2

If there are two solutions, the one we are looking for is the larger one. The reason is that VA has to be greater than all three values used, as they are marked as stationary; if there are two roots, the smaller one can be shown not to fulfil this requirement.
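The case analysis above (use the smaller value per axis, keep only stationary terms, take the larger root) can be condensed into one helper. This is our sketch with F = 1 and grid spacing h; it also drops a neighbour again whenever the candidate root fails to exceed it, which reduces to the same conditions in the three-term case discussed here:

```python
def solve_distance(axis_mins, h=1.0):
    """Largest root VA of sum((VA - v)^2) = h^2, where the sum runs
    over one value per axis: the smaller of the two stationary
    neighbour values on that axis, or None if neither is stationary."""
    vals = sorted(v for v in axis_mins if v is not None)
    while vals:
        n = len(vals)
        s = sum(vals)
        s2 = sum(v * v for v in vals)
        # Expanding the sum gives n*VA^2 - 2*s*VA + (s2 - h^2) = 0.
        disc = s * s - n * (s2 - h * h)
        if disc >= 0:
            va = (s + disc ** 0.5) / n
            if va >= vals[-1]:       # root must exceed every value used
                return va
        vals.pop()                   # drop the largest value and retry
    raise ValueError("no stationary neighbour available")
```

For example, with only one stationary neighbour at distance 0 the update returns 1.0, and with two stationary neighbours both at 1.0 it returns the larger root 1 + √2/2, matching the quadratic above.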

Other Implementations of the FMM.

High Accuracy Fast Marching Method.

The precision of the FMM leaves something to be desired. In Figure … we can see a very simple 2D example in which the front originates from the stationary (black) vertex, which has been assigned the value 0. The two other stationary vertices have already had their distance values computed, while the white vertex is currently being updated. From EQ. 4, the distance value assigned to the white vertex is the larger solution of the quadratic (in voxel units).


Furthermore, the correct distance value is easy to work out, and it differs from the computed one by 0.3 vu. This sort of error is explained by the FMM's lack of awareness of the curvature of the front: if the front were linear, the distance value would be exact. The problem is therefore at its worst when distance values are computed from strongly curved boundary conditions.

Let us explore this issue a bit deeper with the aid of a simple experiment. We calculated the distance values of all voxels to a single point within a maximum radius of 20 vu. The mean error is 0.89 vu, with a maximum error of 1.48 vu. By pre-computing the distance values of the 26-connected neighbours (the 6-connected neighbourhood contains all neighbour voxels within a distance of 1 vu; the 26-connected neighbourhood contains all neighbour voxels within a distance of √3 vu), the results improve to a mean error of 0.73 vu and a maximum error of 1.24 vu.

To better understand the differences between the normal FMM and the FMMHA, Baertzen (pdf1) changes notation, since the normal FMM's one-sided derivatives are computed with the aid of forward and backward differences. More specifically:

where the standard notations for forward and backward differences are used, and the voxel lattice spacing is assumed to have unit value. The approximation of the squared partial derivative of T with respect to x is then

Similarly, we can compute the partial derivatives with respect to y and z, arriving at an approximated squared gradient length of

(EQ. 6)

which amounts to EQ. 3, i.e. we recover the normal FMM. The main difference between the two implementations (FMM and FMMHA) is that the FMMHA replaces the first-order approximations of the partial derivatives with second-order ones. So we have:

From the above second-order equations we obtain polynomial coefficients that differ from those coming from the first order, and a much more stable response from the system. To use this scheme along an axis, the voxel two grid points away must be stationary and its distance value must be smaller than that of the voxel one grid point away.

If these two conditions are not met, the algorithm falls back to the already calculated first-order estimates.

With these few small modifications, the function responsible for recomputing the distance values is altered to use the higher accuracy version (EQ. 7):

(EQ. 7)

This method is very similar to the normal FMM. We analyse the three terms (x, y, z) analogously to the original FMM; let us focus on the first term. For example, suppose the required neighbours are stationary and the left argument of the first max term is picked. We then get the following equation:

(EQ. 8)

In equation 8, the three dots represent the similar terms coming from the other max terms. If the second-order neighbour has a greater value than the first-order one, or is not stationary, the first-order term is added instead.

By completing these steps, we form a second-order polynomial that can be solved analytically. Again, if there are two solutions, the larger one should be picked.

It is also possible to write the coefficients in a more compact way to simplify the notation further:

(EQ. 9)

With

(EQ. 10)

                 FMM              FMMHA
Average Error    0.00467565 vu    0.000496425 vu
Max Error        0.120639 vu      0.0270829 vu
Marching Time    1.4 sec          1.62 sec

Table … FMM and FMMHA error and marching time comparison matrix. (PDF1)

The Shifted Grid Fast Marching Method

The SGFMM is an FMM variation proposed by (M. Sabry Hassouna, pdf2) and (Danielson and Lin {citation 29}). The basic idea behind this algorithm is to calculate the distance values at half-grid positions; the distance value then depends on the marching direction.

In this scheme, the arrival time at a point is computed from a neighboring point at a known position using optimal control theory. As a result, the solution cannot utilize higher order finite difference schemes. In 2D, (M. Sabry Hassouna, pdf2) and (Danielson and Lin {citation 29}) proposed two models as solutions.

The first model uses a 4-connected neighbor system and gives results similar to the FMM. The second, an 8-connected neighbor solution, gives better results than the FMM. Both models share the same idea: the neighborhood around the point x is divided into four quadrants or eight octants. Because all the points of each quadrant/octant are known, the arrival time at x can be computed in each of them, and x is finally assigned the minimum distance value over all quadrants/octants. Nevertheless, when the SGFMM is applied to 3D models, no improvement over the FMM can be reported.

The Group Marching Method

The GMM is another modified FMM version in which the front advances a group of grid points at a time: the solution is computed for the whole group simultaneously rather than for one narrow band point at a time. As a result, the GMM reduces the computational complexity of the FMM to O(n) without losing any of its accuracy. (pdf2)

The method works as follows:

First, a group of points G is isolated from the narrow band in such a way that the points' travel times cannot affect each other during the update process.

Then, the travel times of the points neighboring the group G are computed, and those points join the narrow band.

Finally, the points of G are tagged as known.

The Untidy Fast Marching Method (UFMM)

In another method, L. Yatziv, A. Bartesaghi and G. Sapiro {citation 31 pdf2} reduce the complexity of the FMM to O(n) to improve its computational efficiency. The method stores the narrow band points in a special data structure called an untidy priority queue, whose insert/delete operations cost O(1), instead of a sorted heap that would cost O(log n). The queue is implemented using a dynamic circular array. After each arrival time value is calculated, it is quantized and used as an index into the circular array. Each entry (or bucket) of the circular array holds a list of points with similar T values. The full-precision T value is still used when solving the Eikonal equation once a grid point is selected; the quantized value is used only to place the grid point in the queue. Consequently, errors can be introduced only by a wrong selection order, and the authors prove that the errors so introduced are of the same order as those of the traditional FMM.
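A minimal sketch of such an untidy queue follows (ours; the bucket width and the wrap-around sizing are simplifying assumptions — a real implementation sizes the circular array from the maximum possible spread of T values alive in the queue at once):

```python
import math

class UntidyQueue:
    """Bucketed 'untidy' priority queue sketch. Arrival times are
    quantized into a circular array of buckets, so push and pop
    are O(1). delta is the bucket width; max_range bounds the
    spread of times alive in the queue at any moment."""

    def __init__(self, delta, max_range):
        self.delta = delta
        self.nbuckets = int(math.ceil(max_range / delta)) + 1
        self.buckets = [[] for _ in range(self.nbuckets)]
        self.current = 0   # bucket currently holding the smallest times
        self.size = 0

    def push(self, t, point):
        b = int(t / self.delta) % self.nbuckets   # quantized index
        self.buckets[b].append((t, point))
        self.size += 1

    def pop_approx_min(self):
        if self.size == 0:
            raise IndexError("pop from empty queue")
        # Advance circularly to the next non-empty bucket; within a
        # bucket, points come out in insertion order (hence 'untidy').
        while not self.buckets[self.current]:
            self.current = (self.current + 1) % self.nbuckets
        self.size -= 1
        return self.buckets[self.current].pop(0)
```

Within one bucket the ordering is approximate, which is exactly the controlled selection-order error the authors analyse.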

MULTISTENCILS FAST MARCHING METHODS

All of the related methods so far ignore the information provided by diagonal points and therefore suffer from large errors along diagonal directions. The diagonal information of a point x can be used in two ways. One is to use a single stencil centered at x and aligned with the natural coordinate system, and then rotate the coordinate system several times so that it overlaps the lattice at diagonal grid points. The other is to use several stencils centered at x that cover all its diagonal neighboring points, and then estimate the gradient using directional derivatives.

It is also worth noting that using diagonal stencils to enhance the accuracy of solving PDEs is common practice. The work presented in this part combines multiple stencils and directional derivatives within the FMM, with the result being improved accuracy in solving the Eikonal equation on Cartesian domains.

Single-Stencil Multiple Rotations of Coordinate System

Let us consider a 2D lattice like the one in Figure …a. Assume that the Cartesian coordinate system has been rotated by an angle such that it crosses the lattice at the grid points p1, p2, q1, and q2, which are diagonal with respect to the original coordinate system. Any point of the new coordinate system can be related to a point in the original one by

(EQ. 11)

The partial derivatives of T with respect to x and y can be extracted from the chain rule

(EQ. 12)

Substituting equation 11 into 12 gives us

(EQ. 13)

By squaring both sides of equation 13, one can easily see that

(EQ. 14)

which represents the Eikonal equation.

Therefore, in the rotated coordinate system the arrival time can be calculated from the spatial derivatives along the rotated axes. By solving the Eikonal equation once along the xy-coordinate system and once along the rotated coordinate system, two solutions for the arrival time at x are obtained, of which the smaller one that satisfies the FMM causality relationship is selected. Nevertheless, this technique is restricted to isotropic grid spacing; in any other case, the rotated coordinate system will cross the lattice at points that are not grid points.

2D Multistencils Fast Marching Method

Let us consider the stencil Sv that crosses a 2D Cartesian domain at the grid points p1, p2, q1, and q2, as shown in Figure 1b. Let the unit vectors be along the two stencil directions, and let U1 and U2 be the directional derivatives along them, respectively, given as

(EQ. 15)

(EQ. 16)

thus allowing us to rewrite it as

(EQ. 17)

Therefore,

(EQ. 18)

(EQ. 19)

(EQ. 20)

Because

(EQ. 21)

we have

(EQ. 22)

In terms of the angle between the unit directional vectors, RRT can be given as

(EQ. 23)

And

(EQ. 24)

By substituting equation 24 into 22, we obtain a closed-form expression for the gradient of the arrival time along an arbitrary stencil:

(EQ. 25)

The first-order approximation of the directional derivative Uv that obeys the viscosity solution of:

(EQ. 26-3)

is given by

(EQ. 27)

where T(x), T1, and T2 are given by

(EQ. 28-5)

and xv is the known grid point at which Tv is minimum.

Also, the second-order approximation of the directional derivative Uv that obeys the viscosity solution of equation 26 is given by

(EQ. 29)

where T1 and T2 are given by equations 30 and 31:

(EQ. 30-10)

(EQ. 31-11)

while xv is still the known grid point at which Tv is minimum.

In 2D, two stencils are used (S1 and S2). S1 covers the nearest neighbor points, whereas S2 covers the diagonal points. To simplify, we assume isotropic grid spacing.

The S1 stencil is aligned with the natural coordinate system, as shown in Figure 2a. Hence, equation 25 becomes

(EQ. 32)

For a first-order calculation of the directional derivative, we solve the following equation:

(EQ. 33)

On the other hand, for the second-order approximation of the directional derivative the following equation is solved:

(EQ. 34)

The S2 stencil is aligned with the diagonal neighbors, as shown in Figure 2b. As before, for the first-order approximation of the directional derivative, the following equation is solved:

(EQ. 35)

However, for the second order approximation of the directional derivative, the following equation is used:

(EQ. 36)

For both stencils, if

(EQ. 37)

Figure. … Proposed stencils for the 2D Cartesian domain.

then equations 33, 34, 35, and 36 reduce to a quadratic equation, which can be written as

(EQ. 38)

where the coefficients av, bv, and cv are given as

(EQ. 39)

The value of g(h) for the first- and second-order schemes and for each stencil orientation is given in Table ….

Stencil    First Order System    Second Order System
S1
S2

Table … The value of g(h) for the first and second order systems

Upwind Condition

When solving the quadratic equation, one obtains two solutions. The minimum is excluded, as it does not expand the front. The accepted solution must also satisfy the causality relationship. However, because the value of T(x) is not known in advance, the computed solution T(x) must be tested to check that it is greater than the two adjacent neighbors T1 and T2 that took part in the solution. If the test passes, the solution is accepted. Otherwise, the following solution is used instead:

(EQ. 40)

A better method is to derive the upwind condition before solving the equation. The solution is given as

(EQ. 41)

and the upwind condition requires the following two rules to be met: first, T(x) > T1, and second, T(x) > T2:

(EQ. 42)

After some algebraic manipulations, it is easy to show that the following condition must be satisfied:

(EQ. 43)

The first and second-order calculations of the directional derivatives can be put in the form

(EQ. 44)

The values of kv and nv depend on the order of approximation and the stencil orientation. Substituting equation 44 into 25 and solving for T(x) yields the coefficient values of equation 43:

(EQ. 45)

Substituting equation 45 into equation 43 gives us the following upwind condition:

(EQ. 46)

In [24] and [25], the more restrictive condition of equation 47 is derived, which not only guarantees the upwind condition but also forces the quadratic solution to be real:

(EQ. 47)

3D Multistencils Fast Marching Method

The MSFM technique proposed for 2D images can easily be extended to a 3D Cartesian domain. Let us consider the stencil Sw that crosses the lattice at the grid points l1, l2, p1, p2, q1, and q2, as shown in Figure 1c. Let the unit vectors be along the three stencil directions, with an angle defined between each pair of them, and let U1, U2, and U3 be the directional derivatives along the three directions, respectively. Recalling from the previous section that

(EQ. 48)

Hence,

(EQ. 49)

In 3D, six stencils need to be used. The neighbor points closest to x are covered by S1, while the diagonal points are covered by the remaining stencils, as represented in Figure 3….

Again, to simplify the discussion, we assume isotropic grid spacing. Substituting the stencil angles into equation 49 then gives

(EQ. 50)

For the first order computation of the directional derivative, the following equation is used:

(EQ. 51)

For a second-order calculation of the directional derivative, on the other hand, the following equation is used:

(EQ. 52)

For all stencils, if

(EQ. 53)

then equations 51 and 52 reduce to a second order equation, which can be simplified to:

(EQ. 54)

where the coefficients av , bv , and cv are given in equation 39.

Upwind Condition

Although deriving the upwind condition for a 3D domain is straightforward, it has a very complicated closed-form solution and is therefore time consuming. A cheaper method is to check that the computed T(x) is greater than the values of the three nearest neighbors T1, T2, and T3 that took part in the solution. If the check passes, the solution is accepted. Otherwise, T(x) is tested against the values of the two remaining nearest points; if that check passes, the quadratic equation is solved based on their values and the maximum root is returned. Otherwise, the following solution is used:

(EQ. 55)

Accuracy

Source points:    (51,51)                          (1,1)                            (51,35) and (51,67)
Method            L1        L2        L∞           L1        L2        L∞           L1        L2        L∞
FMM               0.746104  0.697090  1.314846     0.859562  0.951622  1.541677     0.617013  0.496484  1.200993
MSFM              0.291622  0.112154  0.577841     0.350694  0.167881  0.719245     0.274036  0.098130  0.577841
FMMHA             0.352778  0.148263  0.580275     0.346720  0.148236  0.586300     0.300247  0.110171  0.536088
MSFM 2nd Order    0.069275  0.007711  0.215501     0.058817  0.004436  0.212368     0.074058  0.009852  0.420539

Table 5… Error norms (L1, L2, L∞) of the arrival time T1 computed from different source points of a 101x101 image.

Fig. 4. Isocontours of Ta(x). (a), (b), and (c) T1 are generated from different source points. (d) T2 . (e) T3 . (f) T4 .


