

Introduction

Scheduling involves the allocation and sequencing of activities that must be performed on a limited set of available resources [1]. Generally, these problems are defined by a set of n jobs that need to be processed by a set of m working stages. Production workshops are distinguished by the processing routes of their jobs. If all jobs share the same route, i.e., they are processed first on stage 1, then on stage 2, and so on, the problem is called a flow shop. If each job has its own pre-determined processing route, the problem is called a job shop. If each job has to be processed on each of the m stages (where some of these processing times may be zero) and there are no restrictions on the routing of the jobs, the problem becomes an open shop; in this case the scheduler is allowed to determine a route for each job, and different jobs may have different routes [2]. However, real production floors rarely use a single machine for each operation, and processing times are sometimes vague because of measurement errors and the influence of human operators on production processes, whereas in open shop scheduling problems it is commonly assumed that each stage has one machine and that processing times are deterministic. Indeed, the purpose of replicating machines in parallel is to level the speeds of the stages, to increase the throughput and capacity of the shop floor, or to reduce the impact of bottleneck stages on the overall shop efficiency.

In open shop scheduling problems, the processing order of operations is arbitrary. Thus, the solution space of an open shop problem is larger than that of job shop and flow shop problems, and the OSSP is NP-hard when the number of machines is more than two [3]. In an open shop scheduling problem with parallel identical machines, three decisions must be taken:

1. Determination of the processing route of each job;
2. Determination of the job sequence;
3. Assignment of jobs to machines inside each stage.

Therefore, this problem is at least as hard as the classical OSSP and belongs to the class of NP-hard problems. For medium- and large-sized instances, the use of exact methods is often impractical, which motivates the use of efficient metaheuristic methods.

Naderi et al. [] formulated an open shop scheduling problem to minimize total tardiness. They presented four mixed integer linear programming models for OSSPs and then investigated the complexity of these models; they also designed GA and VNS algorithms and investigated the effect of various operators on the GA using the Taguchi method. Seraj & Tavakkoli-Moghadam [] proposed a mathematical programming model for multi-objective OSSPs minimizing total weighted tardiness and total weighted completion time, and suggested a TS algorithm for solving medium- to large-sized instances. Liaw [20] considered the problem of scheduling a preemptive open shop to minimize total tardiness; he developed an efficient constructive heuristic for large-sized problems and a branch-and-bound algorithm for medium-sized problems. Mosheiov & Oron [21] addressed batch scheduling problems on an m-machine open shop to minimize makespan and flow time, assuming identical processing-time jobs, machine- and sequence-independent setup times, and batch availability. Roshanaei et al. [22] considered a non-preemptive OSSP with sequence-dependent setup times on each machine to minimize makespan, and proposed two advanced metaheuristics: a multi-neighborhood search simulated annealing and a hybrid simulated annealing. Yu et al. [23] modeled a museum visitor routing problem as an OSSP with sequence-dependent setup times and used SA to solve large-sized instances. Su et al. [24] studied two models of two-stage processing, with a flow shop at the first stage followed by an open shop at the second stage, to minimize the makespan; they proposed an integer programming model and a branch-and-bound algorithm for model 1, and developed a lower bound for model 2 as a benchmark for the heuristic algorithms. Sedeno-Noda et al. [19] presented a network-flow-based method with time windows, supposing that jobs are preemptable. Naderi et al. [4] investigated an open shop in which each stage consists of a set of parallel machines, with the objective of minimizing total completion time; they suggested a mixed integer linear programming model and a memetic algorithm for this problem. Sevastianov & Woeginger [5] constructed an approximation scheme for multi-processor open shops to minimize the makespan.

The inherent uncertainty in model parameters is increasingly being taken into account in various fields. Several factors involved in real-world scheduling problems are vague or uncertain in nature; this is especially true when human-related factors enter the problem. Thus, the parameters are often subject to uncertainty.

Accordingly, production scheduling problems can be divided into two general categories: deterministic and uncertain scheduling problems [6, 7]. There are basically two approaches to dealing with uncertainty [8]: stochastic-probabilistic theory, and possibility theory or fuzzy set theory [9, 10].

In this work, fuzzy set theory is applied to deal with the uncertainties in scheduling problems. It provides an appropriate alternative framework for the mathematical modeling of real-world systems and offers several advantages:

Probability theory needs considerable knowledge about the statistical distribution of the unknown parameters, whereas fuzzy theory provides an effective way to model uncertainty even when no historical information is available [11].

Using stochastic-probabilistic theory involves extensive computation and requires thorough knowledge of the statistical distribution of the uncertain time parameters [12].

The use of fuzzy set theory reduces the computational complexity of the scheduling problem compared with the stochastic probabilistic theory [13].

One of the capabilities of fuzzy theory is the use of fuzzy rules within heuristic algorithms [7].

Konno & Ishii [25] presented a model for a preemptive open shop scheduling problem with fuzzy resources and fuzzy allowable time. Their problem had two criteria to be maximized: the minimum degree of satisfaction with respect to the processing intervals of the jobs, and the minimum degree of satisfaction with respect to the resource amounts applied in those intervals. Palacios et al. [26] investigated OSSPs with fuzzy processing times and suggested a GA to minimize the average maximum completion time of the jobs. Noori-Darvish et al. [27] addressed an OSSP with sequence-dependent setup times, fuzzy processing times and fuzzy due dates. They presented a new bi-objective possibilistic mixed-integer linear programming model to minimize total weighted tardiness and total weighted completion time. For solving small-sized instances, an interactive fuzzy multi-objective decision making (FMODM) approach, the TH method proposed by Torabi and Hassini, was applied.

With regard to metaheuristics, Liaw [28] suggested an efficient local search algorithm based on tabu search for OSSPs minimizing the makespan. Andersen et al. [29] presented two algorithms, namely a simulated annealing and a genetic algorithm. Panahi & Tavakoli-Moghadam [30] offered an efficient method based on multi-objective simulated annealing and ant colony optimization for an open shop scheduling problem minimizing makespan and total tardiness; they also applied a decoding operator to improve the quality of the produced schedules. Low & Yeh [31] presented a hybrid genetic algorithm that incorporates a local improvement procedure based on tabu search into the classic genetic algorithm for OSSPs. Sha et al. [32] proposed a particle swarm optimization (PSO) algorithm for multi-objective OSSPs; because scheduling problems are discrete while PSO was designed for continuous optimization, they modified the particle position representation, the particle velocity, and the particle movement. Naderi et al. [33] presented an electromagnetism-like metaheuristic (EM) for the open shop scheduling problem with sequence-dependent setup times, incorporating a fast search engine and a simple simulated annealing to improve the algorithm's performance.

In this study, we present a mixed-integer fuzzy linear programming (MIFLP) model for the open shop scheduling problem with a set of parallel machines at each stage. The rest of the paper is organized as follows. The MIFLP formulation of the problem under study is set out in Section 2. In Section 3, we suggest an interactive fuzzy satisfying solution procedure for the proposed model; computational results indicate that the MIFLP model can be solved in reasonable CPU time only for a limited number of jobs. For problems with a larger number of jobs, we describe an electromagnetism-like algorithm in Section 4. We describe the experimental design used to evaluate the proposed method in Section 5. Finally, concluding remarks are given in Section 6.

A MIFLP formulation of the problem

In formulating scheduling models, parameters such as job processing, ready, and setup times are generally considered deterministic. However, in real-world situations these parameters are often uncertain. The time required to process parts on machines cannot be determined exactly due to measurement errors and the involvement of human activities in the manufacturing process. Owing to the inconsistency in the performance of operators and machines on the shop floor, repeated measurement of the system's parameters yields a range of values rather than a single value. Therefore, the information that we have about the model parameters is often vague and imprecise [14, 6]. In situations where we lack enough information to define the parameters, qualitative expressions described by linguistic variables such as 'too short' or 'about 100' are often used on the basis of ambiguous data. Fuzzy set theory provides the tools to deal with such uncertain model parameters, which are treated not as deterministic values but rather as interval values representing estimates [15].

In this section, we formulate a mixed-integer fuzzy linear programming (MIFLP) model for the open shop scheduling problem with a set of parallel machines at each stage, following the formulation presented by Yimer & Demirli [34]. The uncertain processing times are represented by triangular fuzzy numbers. The objective is to minimize the makespan Cmax, that is, the time lag from the start of the first operation until the end of the last one; this problem is often denoted FuzzPOm||Cmax in the literature.

Nomenclature

We introduce the notation, including the parameters, indices and variables used in the model. The indices and parameters are defined in Tables 1, 2 and 3.

Table 1: Indices used in the models

Index | For | Range
j | Jobs | j = 1, 2, ..., n
i | Stages | i = 1, 2, ..., m
r | Machines | r = 1, 2, ..., mi

Table 2: Deterministic parameters used in the models

Parameter | Description
n | The number of jobs
m | The number of stages
mi | The number of identical machines in stage i
Oji | The operation of job j in stage i
Big-M constant | A large positive number

Table 3: Fuzzy parameters used in the models

Fuzzy parameter | Description
Processing time | The processing time of operation Oji
Completion time | The completion time of operation Oji
Makespan | The imprecise makespan of the schedule

A mark (tilde) above a symbol indicates that the corresponding quantity represents a vague value, i.e., a fuzzy number.

Binary integer variables

Xj,i,l = 1 if Oji is processed after Ojl, or 0 otherwise; i ∈ {1, 2, ..., m − 1}, l > i.
Yj,i,k = 1 if Oji is processed after Oki, or 0 otherwise; j ∈ {1, 2, ..., n − 1}, k > j.
Zj,i,r = 1 if Oji is processed on the rth machine in stage i, or 0 otherwise; r ∈ {1, 2, ..., mi}.

General variables

fuzzy solution space

crisp solution space

a feasible solution vector of decision variables

fuzzy goal satisfying level

The proposed model

Fuzzy goal function: The objective is to minimize the completion time of the last delivery among the n jobs, commonly referred to as the makespan; this objective is related to the throughput of the schedule. Because throughput is defined as the amount of work completed per unit time, and because the amount of work in the n-job model is fixed, we maximize throughput by minimizing the makespan [16].

The fuzzy objective function (1) gives the imprecise makespan of all jobs:

(1)

Crisp solution space: The constraint requiring that each job be processed by only one machine at each stage does not depend on the fuzzy time parameters, so it is considered crisp.

(2)

(11)

Fuzzy solution space: The constraints that involve the imprecise time parameters belong to the fuzzy solution space. The fuzzy constraints include:

(3)

(4)

(5)

(6)

(7)

(9)

(10)

Constraint set (3) ensures that the completion time of each operation is at least as large as its processing time. Constraint sets (4) and (5) specify the relation between each pair of operations of a job; for example, the completion time of Oj,i must be greater than that of Oj,l if job j visits stage i after stage l. Similarly, constraint sets (6) and (7) define the relation between the completion times of each pair of jobs in each stage; for example, the completion time of Oj,i must be greater than that of Ok,i if job k precedes job j in stage i and the two jobs are processed on the same machine. Constraint sets (9)-(10) define the decision variables.

Fuzzy goal programming

The imprecise and vague time-dependent parameters are expressed by fuzzy sets. The degrees of the membership functions for the fuzzy parameters are defined on the basis of subjective expert judgment. The symmetric triangular fuzzy number is the simplest form of fuzzy number; it is built from two basic estimates, the most possible value and the maximum deviation from it [17]. For example, a symmetric triangular membership function for a fuzzy processing time can be defined by:
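A standard form of this membership function, written here as a sketch with p^m denoting the most possible processing time and δ > 0 its maximum symmetric deviation (symbols chosen for illustration), is

\[
\mu_{\tilde{p}}(x) =
\begin{cases}
1 - \dfrac{\lvert x - p^{m} \rvert}{\delta}, & p^{m} - \delta \le x \le p^{m} + \delta,\\[4pt]
0, & \text{otherwise},
\end{cases}
\]

so that the fuzzy processing time is fully characterized by the pair (p^m, δ).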

Values at the two ends of the interval have the lowest likelihood of belonging to the set of possible values, so their membership degree is zero. The most likely value, which lies in the middle of the interval, has the highest degree of membership, and the remaining values in the interval are assumed to have a linearly varying membership degree in [0, 1]. Figure 1 shows a symmetric triangular membership function for such a fuzzy processing time. The fuzzy objective function can likewise be defined in terms of two deterministic objective functions for the makespan:

Where

(13)

Similarly, the fuzzy solution space given by Eqs. (3)-(7) can be defined as a combination of two sets of crisp constraints, as follows:

(14)

Where

And

(15)

A fuzzy decision is obtained by considering the intersection of the fuzzy objective and the whole solution space. When the information related to the objective function and the constraint sets is vague, the problem can be formulated as a fuzzy goal programming problem, as described below:

Find:

(16)

To satisfy:

Where

is a feasible solution vector of decision variables in the solution space, related to the fuzzy goal objective. The fuzzified relation symbol in the constraint indicates that the resulting makespan should be around the expected value, with some symmetric deviation on both sides.

Solution approach

For the problem presented in the previous section, the objective function is a symmetric triangular possibility distribution, which can be defined by three vertices.

In fact, minimization is achieved by moving the three vertices towards the origin; under this condition, the problem becomes a crisp multi-objective linear programming (MOLP) problem, converted into three interdependent crisp objectives [17].

These three objective functions are: minimizing the most possible value of the objective, maximizing the possibility of obtaining a lower objective value, and minimizing the risk of obtaining a higher objective value:

(17)

Here, the deviation term represents the symmetric spread of the fuzzy number.

By using the fuzzy decision making of Bellman and Zadeh [18] and the fuzzy programming method of Zimmermann [], the MOLP problem can be transformed into a single-goal linear programming problem. The initial values of the positive and negative ideal solutions are obtained by solving each of the above functions separately:

(18)

By using membership functions outlined below, the objective functions are converted into fuzzy goals.

(19)

Applying the membership functions expressed above and the fuzzy decision of Bellman and Zadeh [18], the MOLP problem can be represented as:

(20)

Finally, by introducing an auxiliary fuzzy goal satisfying level, the MOLP problem can be reduced to the single-objective LP formulation of Zimmermann []:

(21)

In Eq. (21), a high value of the auxiliary satisfying level indicates that the objective functions are optimized with a high degree of confidence.
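As a sketch of this standard construction (with z_k denoting the k-th crisp objective, z_k^+ and z_k^- its positive and negative ideal values from Eq. (18), and λ the auxiliary satisfying level; the symbols are ours, chosen for illustration), the linear membership function of a minimization-type goal and the resulting Zimmermann-style single-objective problem read

\[
\mu_k(z_k) =
\begin{cases}
1, & z_k \le z_k^{+},\\
\dfrac{z_k^{-} - z_k}{z_k^{-} - z_k^{+}}, & z_k^{+} < z_k < z_k^{-},\\
0, & z_k \ge z_k^{-},
\end{cases}
\qquad\qquad
\begin{aligned}
\max\ & \lambda\\
\text{s.t.}\ & \lambda \le \mu_k\bigl(z_k(x)\bigr), \quad k = 1, 2, 3,\\
& x \in X_c, \ \ \lambda \in [0, 1],
\end{aligned}
\]

where X_c denotes the crisp feasible region and the membership function is mirrored for maximization-type goals.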

Numerical example: consider a small-sized problem with n = 4 jobs and m = 2 stages, where m1 = 1 and m2 = 2. The corresponding fuzzy processing times are tabulated in Table 4.

Applying Eq. (18), the positive and negative ideal solutions for the three objective functions are computed as follows:

(18)

The auxiliary model is solved using CPLEX 10.0 installed on a PC with a 2.0 GHz Intel Core 2 Duo processor and 2 GB of RAM. The three auxiliary objective functions are optimized concurrently with a common degree of satisfaction, and the expected triangular fuzzy number for the imprecise objective function is obtained from the three optimized values. Since the output is continuous and symmetric, the most probable value can be taken as the defuzzified value of the imprecise makespan.

Table 4: Fuzzy processing times of the jobs

Job | Stage 1 | Stage 2
1 | (1, 4, 7) | (2, 7, 12)
2 | (1, 3, 5) | (2, 6, 10)
3 | (1, 2, 3) | (2, 5, 8)
4 | (3, 5, 7) | (2, 8, 14)

Proposed discrete Electromagnetism-like algorithm

As mentioned in Section 1, the problem considered in our study belongs to the class of NP-hard problems. Therefore, for solving medium- to large-sized problems, we suggest an efficient discrete electromagnetism-like (DEM) algorithm.

Classic EM

Electromagnetism-like optimization (EM) is a relatively new swarm-intelligence-based method introduced by Birbil and Fang []. The main idea of EM is based on the attraction-repulsion mechanism of electromagnetism theory (Coulomb's law). In this algorithm each solution is considered a charged particle, and the charge of a particle is determined by its objective function value. The strength of attraction or repulsion among the candidate solutions in the population is determined by this charge. The direction of movement of particle i is determined by summing the forces exerted on it by the other particles. In this mechanism, a particle with a superior objective function value attracts the other particles, while a particle with an inferior objective function value repels them. The charge of each particle is calculated by the following formula:

(19)

In Eq. (19), the two objective values involved are that of particle i and that of the best solution found so far. The total force exerted on particle i is calculated as follows:

(20)
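The standard Birbil-Fang expressions for the charge in Eq. (19) and the total force in Eq. (20) are reproduced below as a reconstruction; x^i denotes the i-th particle, x^best the best particle found so far, d the particle dimension and N the population size (our notation):

\[
q^{i} = \exp\!\left( -\,d\;\frac{f(x^{i}) - f(x^{best})}{\sum_{k=1}^{N} \bigl( f(x^{k}) - f(x^{best}) \bigr)} \right),
\]

\[
F^{i} = \sum_{j \ne i}
\begin{cases}
\bigl(x^{j} - x^{i}\bigr)\,\dfrac{q^{i} q^{j}}{\lVert x^{j} - x^{i} \rVert^{2}}, & \text{if } f(x^{j}) < f(x^{i}),\\[6pt]
\bigl(x^{i} - x^{j}\bigr)\,\dfrac{q^{i} q^{j}}{\lVert x^{j} - x^{i} \rVert^{2}}, & \text{if } f(x^{j}) \ge f(x^{i}).
\end{cases}
\]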

Procedure Electromagnetism algorithm
    Initialize()
    While (stopping criterion not met) do
        LocalSearch()
        CalculateTotalForce(F)
        MoveParticlesByForce(F)
        EvaluateParticles()
    End while

The general scheme of EM is shown in Fig. 1. It includes four phases: initialization, computation of the total force exerted on each particle, movement of the particles in the direction of the force, and local search.

Figure : The fundamental procedures of EM

Proposed DEM

Although the results of applying EM have been very satisfactory for continuous-space problems, they have not been adequate for discrete-space problems [18]. The main reason EM cannot be used directly for discrete problems is that its operators (force calculation and movement) are not compatible with such spaces.

Since scheduling problems belong to the category of discrete problems, in this research we extend the classical EM to a discrete EM (DEM), described below.

Encoding and decoding outlines, initialization

A coding scheme is a procedure that enables an algorithm to represent a solution. One such scheme is the permutation list. In this method a string containing an n × m array is designed; in fact, we produce a random permutation of the elements of the set {1, 2, ..., n × m} (n = number of jobs, m = number of stages). Supposing n = 2 and m = 3, the following string is generated:

3 2 6 1 4 5

Figure : Illustration of a permutation list

Fig. 2 indicates that each operation is placed in the sequence according to its corresponding number in the string. According to Fig. 2, job 3 is first processed in stage 1, then job 2 is processed in stage 2, and so on.

A non-delay schedule is applied to decode the permutation list.

While U is not empty do
    y ← the minimum earliest possible starting time s_ij over the operations in U
    R ← the set of operations in U whose earliest starting time equals y
    Choose O* from the set R with the earliest relative position in permutation θ
    Schedule O* on the first available machine of its stage
    Extract O* from U
End While

Non-delay schedule: under the makespan criterion, this decoding reduces the search space without excluding the optimal solution from it. We apply the procedure proposed in [11] and later used in [10]: all operations are placed in a set U of unscheduled operations. We compute y, which equals the minimum of the earliest possible starting times (sij) of the operations in U. All operations whose earliest starting time equals y are assigned to a set called R. Among the operations in R, the operation O* with the earliest relative position in permutation θ is scheduled and extracted from U. In this decoding, we assign the jobs to the first available machine at every stage. Fig. 3 illustrates the decoding scheme.

Figure : The procedure of decoding scheme by the principal of non-delay schedule
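As a concrete illustration, the following Python sketch implements the non-delay decoding just described on crisp processing times; the operation-id encoding (id = job index × m + stage index), the data layout and the function name are illustrative assumptions rather than the authors' original implementation. With fuzzy times, the same loop could be run on the defuzzified (most possible) values.

```python
def nondelay_decode(theta, proc_time, machines_per_stage):
    """Decode a permutation of operation ids into a non-delay schedule.

    Illustrative assumptions (not the authors' exact data layout):
      - operation id k in theta encodes job k // m and stage k % m,
      - proc_time[j][i] is the (crisp) processing time of job j in stage i,
      - machines_per_stage[i] is the number of identical machines in stage i.
    Returns the makespan of the decoded schedule.
    """
    n, m = len(proc_time), len(proc_time[0])
    pos = {op: k for k, op in enumerate(theta)}        # priority of each operation
    job_free = [0.0] * n                               # when each job becomes free
    mach_free = [[0.0] * machines_per_stage[i] for i in range(m)]
    U = {(j, i) for j in range(n) for i in range(m)}   # unscheduled operations
    makespan = 0.0
    while U:
        # earliest possible starting time of every unscheduled operation
        est = {(j, i): max(job_free[j], min(mach_free[i])) for (j, i) in U}
        y = min(est.values())
        R = [op for op in U if est[op] == y]
        # among the candidates, pick the one appearing earliest in theta
        j, i = min(R, key=lambda op: pos[op[0] * m + op[1]])
        r = min(range(machines_per_stage[i]), key=lambda k: mach_free[i][k])
        finish = max(job_free[j], mach_free[i][r]) + proc_time[j][i]
        job_free[j] = mach_free[i][r] = finish
        makespan = max(makespan, finish)
        U.remove((j, i))
    return makespan
```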

Calculating total force and particles movement

This study applies the modified EM proposed by Debels et al. [] to obtain the total force exerted on a particle. In this procedure, the force exerted on particle i by particle j is not determined from fixed charges qi and qj; instead, a pairwise charge qij is computed from the relative difference between f(xi) and f(xj).

In the proposed algorithm, roulette-wheel selection is used to select particles i and j. After selecting the two particles, the pairwise charge is computed as follows:

(21)

If the objective value of particle i is larger than that of particle j, particle j attracts particle i; conversely, when the objective value of particle j is larger, particle i attracts particle j; and no force is exerted when the two values are equal. The force exerted on particle i by particle j is then calculated as follows:

(22)
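A plausible reconstruction of Eqs. (21)-(22), in the spirit of the relative-difference charge of Debels et al. (x^worst and x^best denote the worst and best particles of the current population; the notation is ours and may differ from the authors' exact formulation), is

\[
q_{ij} = \frac{f(x^{i}) - f(x^{j})}{f(x^{worst}) - f(x^{best})},
\qquad
F_{ij} = q_{ij}\,\bigl( x^{j} \ominus x^{i} \bigr),
\]

so that particle i is pulled toward particle j when f(x^i) > f(x^j), pushed away when f(x^i) < f(x^j), and left unchanged when the two values are equal, in agreement with the description above.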

The particle then moves from its current solution to a new one in the direction of the resulting force. The definitions of the subtract and add operators used for this discrete movement are as follows.

The subtract operator. This operator is implemented either as a Position-based Crossover or as a Linear Order Crossover, described below.

Position-based Crossover: a value L, obtained by multiplying the number of dimensions n of a particle by a fraction and rounding up, determines how many positions are exchanged: L randomly selected dimensions of particle i are moved to the new particle, and the remaining positions are filled with the numbers of particle j in their original order; under the converse condition, the procedure is reversed (the roles of the two particles are exchanged). Fig. 4 shows the implementation steps of the operator. Suppose the permutations of particles i and j are as given in the figure, so that the number of dimensions of each particle is 6; then L = 6 × 0.23 = 1.38, which is rounded up to 2. We therefore randomly select 2 dimensions of particle j (here dimensions 1 and 5) and transfer them to the new particle. The selected numbers are removed from particle i, and the remaining numbers are placed into the new particle according to their order in particle i.
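A minimal Python sketch of this position-based crossover; here the fraction that sets L is passed in as `frac` (in the worked example above it equals 0.23, and one may plausibly take it from the pairwise charge, though the source does not say so explicitly), and the donor/receiver roles are assumed to have been decided beforehand.

```python
import math
import random

def position_based_crossover(receiver, donor, frac):
    """Copy ceil(len * frac) randomly chosen positions from the donor particle,
    then fill the remaining slots with the receiver's leftover numbers in their
    original order (illustrative sketch, not the authors' exact code)."""
    n = len(receiver)
    L = math.ceil(n * frac)                      # e.g. 6 * 0.23 = 1.38 -> 2
    chosen = random.sample(range(n), L)
    child = [None] * n
    taken = set()
    for k in chosen:                             # positions copied from the donor
        child[k] = donor[k]
        taken.add(donor[k])
    fillers = iter(v for v in receiver if v not in taken)
    for k in range(n):                           # remaining slots from the receiver
        if child[k] is None:
            child[k] = next(fillers)
    return child
```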

Linear Order Crossover (LOX): first introduced by Falkenauer & Bouffouix [35], it works as follows: a subsequence of operations is randomly selected from one parent, and the initial part of the offspring is created by copying this subsequence into the corresponding positions. The operations contained in the subsequence are then deleted from the second parent, and the remaining operations are placed into the unfixed positions of the offspring from left to right, according to their order in the second parent.

This procedure is shown in Fig. 5.
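A corresponding Python sketch of LOX (again an illustrative implementation rather than the authors' exact code):

```python
import random

def linear_order_crossover(parent1, parent2):
    """Copy a random subsequence of parent1 into the child at the same positions,
    then fill the remaining positions, left to right, with the operations of
    parent2 that do not appear in that subsequence (illustrative sketch)."""
    n = len(parent1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = parent1[a:b + 1]            # fixed subsequence from parent1
    fixed = set(parent1[a:b + 1])
    fillers = iter(v for v in parent2 if v not in fixed)
    for k in range(n):
        if child[k] is None:
            child[k] = next(fillers)
    return child
```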

Figure : Illustration of the Position-based Crossover operator (Parent 1, Parent 2 and the resulting offspring permutations)

Figure : Illustration of the LOX crossover operator (the selected subsequence, Parent 1, Parent 2 and the resulting offspring)

The add operator. This operator can be considered an Extension of the Precedence Preservative Crossover [36] and is called EPPX. It works as follows: a string of the same length as the particle is produced and all of its elements are filled with random numbers in [0, 1]. This string defines the order in which elements are successively drawn from the two particles. The offspring is initially empty. Starting with the first element of each particle, when the k-th element is selected, the corresponding number is transferred to the offspring depending on the value of the k-th random number; if the selected element occupies the d-th position in the other particle, that element is deleted from the other particle and its elements between positions k and d are shifted right once. The step is repeated until both particles are empty and the offspring is obtained. Fig. 6 gives an illustration of EPPX.

Figure : An illustration of EPPX
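Because the worked example in the source is only partially legible, the following Python sketch shows one plausible PPX-style reading of the add operator: the k-th random key decides which particle donates the next element, and the donated element is then removed from both particles. The names and the 0.5 threshold are illustrative assumptions.

```python
import random

def eppx(particle_a, particle_b, keys=None, threshold=0.5):
    """One plausible reading of the EPPX add operator (illustrative sketch):
    a string of random numbers in [0, 1] decides, position by position, which
    particle supplies its current first element; the chosen element is appended
    to the offspring and deleted from both particles."""
    a, b = list(particle_a), list(particle_b)
    if keys is None:
        keys = [random.random() for _ in range(len(a))]
    offspring = []
    for key in keys:
        donor, other = (a, b) if key < threshold else (b, a)
        gene = donor.pop(0)          # take the current first element of the donor
        offspring.append(gene)
        other.remove(gene)           # keep the two particles consistent
    return offspring
```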

Local procedure

This procedure selects the best solution at each iteration and perturbs it by swapping two randomly chosen positions, and then evaluates the objective value of the perturbed solution. If the objective value of the new solution is better than that of the best solution, the new solution replaces the best one. Otherwise, if the new solution is worse than the best solution but better than the worst solution, it replaces the worst solution. In this way, the procedure iteratively improves the average quality of the population.
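The local search step can be sketched in Python as follows, assuming the population is a list of permutations and `fitness` returns the (crisp or defuzzified) makespan to be minimized; the names and the iteration count are illustrative.

```python
import random

def local_search(population, fitness, iterations=50):
    """Perturb the current best particle by swapping two random positions.
    A strictly better neighbour replaces the best particle; otherwise, if it
    still beats the worst particle, it replaces the worst one (sketch)."""
    for _ in range(iterations):
        best_idx = min(range(len(population)), key=lambda i: fitness(population[i]))
        worst_idx = max(range(len(population)), key=lambda i: fitness(population[i]))
        neighbour = population[best_idx][:]
        a, b = random.sample(range(len(neighbour)), 2)
        neighbour[a], neighbour[b] = neighbour[b], neighbour[a]
        f_new = fitness(neighbour)
        if f_new < fitness(population[best_idx]):
            population[best_idx] = neighbour
        elif f_new < fitness(population[worst_idx]):
            population[worst_idx] = neighbour
    return population
```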

Algorithm’s calibration

Parameter setting is an important part of designing an algorithm, since it adapts the algorithm to the problem at hand. In this section, the behaviour of DEM with different operators and parameter values is therefore appraised; several DEM variants can be obtained from different combinations of parameters and operators.

Among the alternative experimental designs, the Taguchi method is an efficient choice for calibrating the algorithm because it can survey a large number of decision variables with a small number of experiments [37]. In the Taguchi method, factors are categorized into two main groups: controllable factors and noise factors. Noise factors are those over which we have no direct control; since removing them is often impossible, the Taguchi method seeks to minimize their impact and to determine the optimal levels of the controllable factors [38]. Taguchi studies the impact of the factors on the variance of the response variable and then, based on the mean response, determines the impact of the factors that do not affect the variance. A distinguishing feature of the Taguchi method is that it addresses the robustness of the algorithm through the signal-to-noise (S/N) ratio. Taguchi classifies objective functions into three groups: the smaller-the-better type, the larger-the-better type, and the nominal-is-best type. Considering that almost all objective functions in scheduling are of the smaller-the-better type, the corresponding S/N ratio [39] is:

(23)
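The usual smaller-the-better S/N ratio, to which Eq. (23) corresponds (Y_i denotes the response of replicate i and s the number of replicates; the symbols are ours), is

\[
S/N = -10 \, \log_{10}\!\left( \frac{1}{s} \sum_{i=1}^{s} Y_i^{2} \right).
\]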

Table 5 shows the factors that need to be tuned, together with their levels.

From the standard table of orthogonal arrays, the L8 array is chosen for the algorithm. We generate a set of 25 instances as follows: we consider 5 size combinations (4×4, 5×5, 7×7, 10×10 and 15×15), and for each combination there are five replicates with different mi (the number of parallel machines in each stage), generated from a uniform distribution over (2, 4), summing up to 25 instances. The processing times are randomly generated from a uniform distribution over (1, 99). To conduct the experiments, we implement DEM in C# and run it on a PC with a 2.0 GHz Intel Core 2 Duo processor and 2 GB of RAM. We use the relative percentage deviation (RPD) as a common performance measure to compare the methods. RPD is calculated as:

RPD = (Algsol − Minsol) / Minsol × 100

Where Algsol is Cmax obtained for a given algorithm and instance and Minsol is the lowest Cmax for a given instance obtained by any of the algorithms.

Table 5: Factors and their levels

Factor | Number of levels | Levels
Crossover operator | 2 | (1) Position-based Crossover, (2) Linear Order Crossover
Population size | 3 | 10, 20, 40
Number of local searches | 3 | 15, 25, 50

We run DEM for each trial of the Taguchi experiment. Table 6 shows the results transformed into S/N ratios, and Fig. 7 shows the mean S/N ratio obtained for each level of the factors. The selected levels of the factors are: crossover operator = Position-based, population size = 20, number of local searches = 50.

Table : Results transformed into S/N ratios

Cross_Type | Pop_Size | Local_No | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 | S/N
1 | 10 | 15 | 20.33 | 25.61 | 28.82 | 26.69 | 21.90 | -27.91
1 | 10 | 25 | 21.06 | 22.07 | 27.80 | 18.72 | 26.41 | -27.41
1 | 10 | 50 | 19.51 | 17.63 | 16.34 | 18.00 | 18.37 | -25.11
1 | 20 | 15 | 10.84 | 14.13 | 11.58 | 12.69 | 14.08 | -22.10
1 | 20 | 25 | 10.51 | 10.74 | 12.70 | 10.59 | 13.95 | -21.42
1 | 20 | 50 | 9.91 | 8.65 | 6.72 | 7.92 | 5.54 | -17.95
1 | 40 | 15 | 17.06 | 16.80 | 16.18 | 10.61 | 13.94 | -23.59
1 | 40 | 25 | 15.11 | 17.13 | 16.56 | 15.81 | 15.89 | -24.14
1 | 40 | 50 | 18.54 | 16.57 | 19.59 | 18.83 | 20.18 | -25.47
2 | 10 | 15 | 25.60 | 25.11 | 20.27 | 20.55 | 22.56 | -27.21
2 | 10 | 25 | 24.19 | 24.93 | 23.45 | 25.92 | 25.31 | -27.88
2 | 10 | 50 | 23.78 | 20.72 | 19.28 | 17.74 | 21.28 | -26.30
2 | 20 | 15 | 18.67 | 16.54 | 14.54 | 15.60 | 14.92 | -24.15
2 | 20 | 25 | 13.69 | 14.64 | 15.16 | 14.74 | 17.25 | -23.60
2 | 20 | 50 | 12.07 | 13.77 | 12.55 | 13.83 | 10.96 | -22.07
2 | 40 | 15 | 16.70 | 15.16 | 16.69 | 16.78 | 14.27 | -24.06
2 | 40 | 25 | 17.76 | 17.67 | 19.00 | 19.53 | 18.03 | -25.30
2 | 40 | 50 | 21.12 | 20.70 | 18.03 | 20.41 | 19.55 | -26.02

Figure : The mean S/N ratio plot for each level of the factors

Experimental results

In this section, we appraise the MIFLP model and the proposed DEM algorithm. First, small-sized problems are solved to evaluate the mathematical model, and the DEM algorithm is assessed against the results obtained from the model. We implement the MIFLP model in CPLEX 10.1 and the algorithms in Matlab 7.0, and run them on a PC with a 2.0 GHz Intel Core 2 Duo processor and 2 GB of RAM. The stopping criterion used when testing all instances with the algorithms is n × m × 0.4 s.

For the experimental study we use the test bed given in [40], where the authors follow [41] and generate a set of fuzzy problem instances from the well-known benchmark problems of [42]. Each crisp processing time t is converted into a symmetric fuzzy processing time p(t) such that the most possible value is p2 = t, while p1 and p3 are random values symmetric with respect to p2, generated so that the TFN's maximum range of fuzziness is 30% of p2. Under these conditions, the optimal solution of the crisp problem provides a lower bound for the expected fuzzy makespan [41]. Ten fuzzy instances were generated from each crisp problem instance, so in total there are 250 problem instances. Since the investigated problem is the fuzzy open shop scheduling problem with parallel machines at each stage, we first assume mi = 1 to evaluate the model and the algorithm, and then investigate the case mi ≠ 1. To assess performance, our algorithm is compared with the algorithm proposed by Chang et al. [43].
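A small Python sketch of this fuzzification step; the function name is illustrative, and the exact sampling scheme of [40, 41] may differ in detail, but the spread is capped so that p3 − p1 never exceeds 30% of p2.

```python
import random

def fuzzify(t, max_range=0.30):
    """Turn a crisp processing time t into a symmetric TFN (p1, p2, p3) with
    p2 = t and a total spread p3 - p1 of at most max_range * t (sketch)."""
    delta = random.uniform(0.0, max_range * t / 2.0)
    return (t - delta, t, t + delta)

# Example: ten fuzzy instances from one crisp instance
# fuzzy_instances = [[fuzzify(t) for t in crisp_times] for _ in range(10)]
```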

Tables 6, 7 and 8 summarize the results. Computational results for small- to medium-sized instances and for large-sized instances with mi = 1 are collected in Tables 6 and 7, respectively, and computational results for small- to large-sized instances with mi ≠ 1 are collected in Table 8. In these tables, Lower Bound, MILP Model, DEM and MEM refer, respectively, to solving the crisp instances as in Fortemps [41], solving the fuzzy instances with CPLEX 10.1, solving the fuzzy instances with the proposed algorithm, and solving the fuzzy instances with the algorithm suggested by Chang et al. [43]. The mathematical model is allowed a maximum of 1000 s of computational time. As shown in Table 6, the results obtained from the mathematical model do not differ greatly from the lower bound (LB); the remaining difference can be attributed to the fuzziness of the processing times. The results obtained from the DEM algorithm and from the EM algorithm of Chang differ very little from the model and from the LB. For the large-sized problems shown in Table 7, the mean RPD of DEM is 9.218% and the mean RPD of EM is 12.79%; according to the mean RPD, DEM is more effective than EM. Similarly, Table 8, which represents the case mi ≠ 1, gives a mean RPD of 9.12% for DEM and 13.05% for EM, so again DEM is more effective than EM.

Table : Small-size experiments (fuzzy, without parallel machines, mi = 1)

Problem | Lower Bound | MILP Model Cmax | MILP CPU time | DEM Cmax | DEM CPU time limit | MEM Cmax | MEM CPU time limit
Tail_4×4_1 | 193 | 205.28 | 20.10 | 206.79 | 6.40 | 208.87 | 6.40
Tail_4×4_2 | 236 | 251.10 | 19.58 | 251.129 | 6.40 | 254.65 | 6.40
Tail_4×4_3 | 271 | 284.29 | 18.59 | 287.01 | 6.40 | 287.81 | 6.40
Tail_4×4_4 | 250 | 266.92 | 22.29 | 264.246 | 6.40 | 269.85 | 6.40
Tail_4×4_5 | 295 | 311.74 | 20.25 | 312.322 | 6.40 | 318.15 | 6.40
Tail_4×4_6 | 189 | 200.79 | 12.48 | 198.511 | 6.40 | 198.91 | 6.40
Tail_4×4_7 | 201 | 212.95 | 15.52 | 213.995 | 6.40 | 211.29 | 6.40
Tail_4×4_8 | 217 | 228.96 | 12.80 | 229.576 | 6.40 | 230.60 | 6.40
Tail_4×4_9 | 261 | 276.91 | 24.77 | 275.302 | 6.40 | 282.77 | 6.40
Tail_4×4_10 | 217 | 229.46 | 20.33 | 232.099 | 6.40 | 235.61 | 6.40
Tail_5×5_1 | 300 | 321.90 | 54.91 | 325.22 | 10.00 | 325.48 | 10.00
Tail_5×5_2 | 262 | 281.12 | 47.09 | 284.135 | 10.00 | 284.12 | 10.00
Tail_5×5_3 | 323 | 344.89 | 74.99 | 345.025 | 10.00 | 345.15 | 10.00
Tail_5×5_4 | 310 | 328.72 | 45.08 | 326.903 | 10.00 | 333.96 | 10.00
Tail_5×5_5 | 326 | 350.29 | 88.53 | 350.54 | 10.00 | 349.91 | 10.00
Tail_5×5_6 | 312 | 334.65 | 88.55 | 339.991 | 10.00 | 341.41 | 10.00
Tail_5×5_7 | 303 | 322.06 | 37.23 | 323.352 | 10.00 | 328.40 | 10.00
Tail_5×5_8 | 300 | 318.956 | 81.23913 | 323.346 | 10.00 | 321.983 | 10.00
Tail_5×5_9 | 353 | 373.912 | 95.11193 | 380.006 | 10.00 | 377.009 | 10.00
Tail_5×5_10 | 326 | 347.477 | 81.83055 | 350.934 | 10.00 | 352.562 | 10.00
Tail_7×7_1 | 435 | 514.438 | 1000 | 466.589 | 19.6 | 470.552 | 19.6
Tail_7×7_2 | 443 | 512.912 | 1000 | 482.432 | 19.6 | 487.322 | 19.6
Tail_7×7_3 | 468 | 550.592 | 1000 | 510.785 | 19.6 | 510.356 | 19.6
Tail_7×7_4 | 463 | 522.257 | 1000 | 509.027 | 19.6 | 499.813 | 19.6
Tail_7×7_5 | 416 | 468.926 | 1000 | 459.151 | 19.6 | 465.336 | 19.6
Tail_7×7_6 | 451 | 537.998 | 1000 | 483.587 | 19.6 | 504.25 | 19.6
Tail_7×7_7 | 422 | 495.524 | 1000 | 456.472 | 19.6 | 462.28 | 19.6
Tail_7×7_8 | 424 | 507.255 | 1000 | 469.284 | 19.6 | 469.137 | 19.6
Tail_7×7_9 | 458 | 520.755 | 1000 | 502.55 | 19.6 | 502.515 | 19.6
Tail_7×7_10 | 398 | 461.154 | 1000 | 424.128 | 19.6 | 438.36 | 19.6

Table : Large-size experiments (fuzzy, without parallel machines, mi = 1)

Problem | Lower Bound | DEM Cmax | DEM RPD% | MEM Cmax | MEM RPD%
Tail_10×10_1 | 637 | 701.191 | 10.077 | 683.105 | 7.238
Tail_10×10_2 | 588 | 660.677 | 12.36 | 645.313 | 9.747
Tail_10×10_3 | 598 | 662.027 | 10.7069 | 677.996 | 13.38
Tail_10×10_4 | 577 | 635.445 | 10.1291 | 647.966 | 12.3
Tail_10×10_5 | 640 | 660.47 | 3.19846 | 680.993 | 6.405
Tail_10×10_6 | 538 | 599.37 | 11.407 | 631.998 | 17.47
Tail_10×10_7 | 616 | 659.793 | 7.10933 | 699.251 | 13.51
Tail_10×10_8 | 595 | 636.899 | 7.04183 | 635.84 | 6.864
Tail_10×10_9 | 595 | 631.478 | 6.1307 | 642.759 | 8.027
Tail_10×10_10 | 596 | 635.514 | 6.62982 | 656.426 | 10.14
Tail_15×15_1 | 937 | 1054.57 | 12.5472 | 1067.52 | 13.93
Tail_15×15_2 | 918 | 998.459 | 8.76455 | 1032.65 | 12.49
Tail_15×15_3 | 871 | 936.259 | 7.49239 | 1000.52 | 14.87
Tail_15×15_4 | 934 | 1019.01 | 9.10169 | 1070.66 | 14.63
Tail_15×15_5 | 946 | 1023.21 | 8.16196 | 1102.73 | 16.57
Tail_15×15_6 | 933 | 1010.85 | 8.34393 | 1047.81 | 12.31
Tail_15×15_7 | 891 | 971.593 | 9.04521 | 1046.32 | 17.43
Tail_15×15_8 | 893 | 967.635 | 8.3578 | 967.39 | 8.33
Tail_15×15_9 | 899 | 1009.21 | 12.2594 | 958.456 | 6.614
Tail_15×15_10 | 902 | 1010.7 | 12.0505 | 1040.85 | 15.39
Tail_20×20_1 | 1155 | 1267.75 | 9.76153 | 1370.99 | 18.7
Tail_20×20_2 | 1241 | 1386.03 | 11.6863 | 1424.07 | 14.75
Tail_20×20_3 | 1257 | 1424.63 | 13.3355 | 1406.3 | 11.88
Tail_20×20_4 | 1248 | 1319.4 | 5.72081 | 1423.62 | 14.07
Tail_20×20_5 | 1256 | 1397.88 | 11.2966 | 1485.69 | 18.29
Tail_20×20_6 | 1204 | 1281.91 | 6.47074 | 1355.98 | 12.62
Tail_20×20_7 | 1294 | 1442.75 | 11.4954 | 1542.63 | 19.21
Tail_20×20_8 | 1169 | 1293.97 | 10.6902 | 1281.93 | 9.661
Tail_20×20_9 | 1289 | 1380.41 | 7.09122 | 1485.19 | 15.22
Tail_20×20_10 | 1241 | 1341.23 | 8.07672 | 1386.62 | 11.73
Average RPD % | | | 9.218 | | 12.79

Table : Experiments in fuzzy form with parallel machines (mi ≠ 1)

Problem | Lower Bound | DEM Cmax | DEM RPD% | MEM Cmax | MEM RPD% | CPU time limit
5×5×2_1 | 119 | 127.979 | 7.54516 | 133.241 | 11.97 | 10
5×5×2_2 | 164 | 177.593 | 8.28822 | 187.071 | 14.07 | 10
5×5×2_3 | 123 | 135.179 | 9.90168 | 138.202 | 12.36 | 10
5×5×2_4 | 135 | 147.577 | 9.31617 | 149.58 | 10.8 | 10
5×5×2_5 | 225 | 246.379 | 9.50176 | 258.334 | 14.82 | 10
10×10×3_1 | 406 | 439.241 | 8.18741 | 457.75 | 12.75 | 40
10×10×3_2 | 385 | 417.308 | 8.39163 | 432.755 | 12.4 | 40
10×10×3_3 | 308 | 338.336 | 9.84942 | 351.4 | 14.09 | 40
10×10×3_4 | 220 | 241.075 | 9.57977 | 244.08 | 10.95 | 40
10×10×3_5 | 315 | 344.734 | 9.4393 | 357.659 | 13.54 | 40
15×15×4_1 | 376 | 413.1 | 9.86691 | 422.447 | 12.35 | 90
15×15×4_2 | 359 | 403.436 | 12.3777 | 412.047 | 14.78 | 90
15×15×4_3 | 408 | 452.204 | 10.8343 | 477.862 | 17.12 | 90
15×15×4_4 | 463 | 524.975 | 13.3854 | 527.437 | 13.92 | 90
15×15×4_5 | 425 | 467.509 | 10.002 | 479.794 | 12.89 | 90
20×20×5_1 | 625 | 682.046 | 9.12741 | 720.544 | 15.29 | 160
20×20×5_2 | 535 | 579.805 | 8.3747 | 585.372 | 9.415 | 160
20×20×5_3 | 598 | 659.51 | 10.2859 | 657.675 | 9.979 | 160
20×20×5_4 | 559 | 605.285 | 8.27991 | 622.233 | 11.31 | 160
20×20×5_5 | 620 | 667.136 | 7.60256 | 685.706 | 10.6 | 160
25×25×5_1 | 900 | 967.28 | 7.47555 | 996.243 | 10.69 | 250
25×25×5_2 | 935 | 1011.99 | 8.23374 | 1096.86 | 17.31 | 250
25×25×5_3 | 1035 | 1107.55 | 7.00928 | 1144.27 | 10.56 | 250
25×25×5_4 | 795 | 870.295 | 9.47106 | 930.986 | 17.11 | 250
25×25×5_5 | 1072 | 1161.76 | 8.37359 | 1231.95 | 14.92 | 250
30×30×5_1 | 1359 | 1457.62 | 7.25683 | 1524.04 | 12.14 | 360
30×30×5_2 | 1109 | 1200.04 | 8.20957 | 1251.86 | 12.88 | 360
30×30×5_3 | 1273 | 1375.01 | 8.01362 | 1452.01 | 14.06 | 360
30×30×5_4 | 1236 | 1363.56 | 10.3205 | 1400.87 | 13.34 | 360
30×30×5_5 | 1085 | 1192.99 | 9.95301 | 1245.07 | 14.75 | 360
Average RPD % | | | 9.12073 | | 13.05 |

We also carry out an analysis of variance (ANOVA) to compare the performance of the two algorithms. Table 9 shows the results of the ANOVA. Since the p-value is below 0.05, there is a significant difference between the two algorithms. Fig. 8 likewise confirms the efficiency of the DEM algorithm.

Table : ANOVA: results versus algorithms

Source | df | SS | MS | F | P-value
Algorithms | 1 | 234.85 | 234.85 | 71.78 | 0.00
Error | 58 | 189.75 | 3.27 | |
Total | 59 | 424.60 | | |

Figure : mean effect plot for algorithms

Conclusion


