Genetic and Hybrid Algorithm Approaches to Flow Shop Scheduling


Master's Thesis, 2016

107 Pages, Grade: 5


Excerpt


TABLE OF CONTENTS

1. INTRODUCTION

2. FLOW SHOP SCHEDULING: PROBLEM CHARACTERIZATION
2.1. Practical implementation: Johnson’s algorithm
2.2. Relevance of the flow shop scheduling problem

3. FLOW SHOP SCHEDULING: SOLUTION METHODS
3.1. Advances in integer linear programming
3.2. Integer linear programming: suitable problem structures
3.3. Integer linear programming: branch and bound method
3.3.1. Branch and bound method: practical implementation
3.4. Fractional cutting plane method
3.4.1. Fractional cutting plane method: practical implementation
3.5. Cutting plane methods
3.6. Branch and cut method overview
3.6.1. Branch and cut method: practical implementation
3.7. Genetic algorithms
3.7.1. Genetic algorithms: parameter settings
3.7.2. Genetic algorithms: encoding
3.7.3. Genetic algorithms: convergence properties
3.7.4. Genetic algorithms: practical implementation
3.8. NEH heuristic
3.8.1. NEH algorithm: practical implementation

4. PROPOSED SOLUTIONS
4.1. Flow shop scheduling: limits of integer linear programming formulations
4.2. Flow shop scheduling: crossover and mutation operators
4.2.1. Order one crossover
4.2.2. Mutation
4.2.3. Selection
4.2.4. No free lunch theorems for optimization and proposed solutions

5. ANALYSIS OF RESULTS AND FUTURE RESEARCH DIRECTIONS
5.1. Quantitative analysis
5.2. Qualitative analysis
5.3. Future research directions

6. CONCLUSION

KOKKUVÕTE (SUMMARY IN ESTONIAN)

7. APPENDIX

8. BIBLIOGRAPHY

LIST OF FIGURES

Figure 2.1: Input-output relations

Figure 2.2.1: Makespan for sequence J1,J2,J3,J4,J5

Figure 2.2.2: Johnson’s algorithm step two

Figure 2.2.3: Johnson’s algorithm step three

Figure 2.2.4: Johnson’s algorithm optimal solutions

Figure 2.2.5: Comparison between optimal and initial sequences

Figure 3.1: Common methods used to solve the permutation flow shop problem

Figure 3.2: Branch and bound practical implementation tree

Figure 3.3: Valid inequalities

Figure 3.4.1: Branch and cut initial relaxation

Figure 3.4.2: Branch and cut optimal result compared to relaxation

Figure 3.5: Branch and cut tree

Figure 3.6: Fuzzy controller for mutation and crossover operators

Figure 3.7.1: One-cut point crossover

Figure 3.7.2: Creating new offspring

Figure 3.7.3: Mutation

Figure 3.8.1: Insertion of job two

Figure 3.8.2: Insertion of job four

Figure 3.8.3: NEH final result

Figure 4.1: Insertion neighborhood

Figure 4.2: Interchange move

Figure 4.3.1: Crossover step one and two

Figure 4.3.2: Crossover step three

Figure 4.3.3: Crossover step four

Figure 4.3.4: Crossover step five

Figure 4.3.5: Crossover step six

Figure 4.4: Pairwise exchange mutation

Figure 4.5: Linear ranking with different selective pressure levels

Figure 5.1: Kaplan-Meier survival curves - AGA versus AHGA1

Figure 5.2: Kaplan-Meier survival curves - AHGA1 versus AHGA2 (treatment groups two and three)

Figure 5.3: Iterated greedy algorithm for flow shop scheduling

LIST OF TABLES

Table 1.1: Goals and logical structure

Table 2.1: Permutation flow shop assumptions

Table 2.2: Flow shop scheduling complexity

Table 2.3: Asymptotic notation

Table 2.4: Problem data

Table 2.4.1: Johnson’s algorithm step one

Table 2.4.2: Johnson’s algorithm step three

Table 2.4.3: Johnson’s algorithm final step

Table 2.5: Complexity classes

Table 3.1: Combination strategies for metaheuristics

Table 3.2: Branch and bound Matlab code

Table 3.2.1: Nodes two and three

Table 3.2.2: Nodes four and five

Table 3.2.3: Nodes six and seven

Table 3.2.4: Nodes eight and nine

Table 3.2.5: Nodes ten and eleven

Table 3.2.6: Nodes twelve and thirteen

Table 3.2.7: Nodes fourteen and fifteen

Table 3.3: Fractional cutting plane method step one

Table 3.3.1: Fractional cutting plane method step three

Table 3.3.2: Fractional cutting plane method step four

Table 3.3.3: Fractional cutting plane method step six

Table 3.3.4: Fractional cutting plane method step seven

Table 3.4: Relations between cuts and problem structures

Table 3.5: Branch and cut algorithm description

Table 3.6.1: Matlab code for relaxation

Table 3.6.2: Branch and cut, step two, node two

Table 3.6.3: Branch and cut, step two, node three

Table 3.6.4: Branch and cut node four

Table 3.7.1: Roulette wheel selection

Table 3.7.2: Determining cutting points

Table 3.8.1: NEH algorithm problem data

Table 4.1: Proposed solutions

Table 4.2.1: Feasible sequences

Table 4.2.2: Unfeasible/illegal sequences

Table 4.2.3: One point crossover with illegal parents

Table 4.2.4: One point crossover with feasible parents

Table 4.2.5: Order one crossover with feasible parents

Table 4.3.1: Linear ranking step one

Table 4.3.2: Linear ranking step two

Table 5.1: Summarized results

Table 5.2: Sample description

Table 5.3: Mean and median survival times

Table 5.4: Statistical significance

Table 5.5: Sample characteristics

Table 5.6: Mean and median survival times

Table 5.7: Pairwise comparisons

LIST OF CODES

Code 4.1: Order one crossover - Matlab implementation

Code 4.2: Mutation - Matlab implementation

Code 7.1: Julia code for AGA using Taillard instances

Code 7.2: Julia code for AHGA - local search with insertion neighborhood until best improvement

Code 7.3: Julia code for AHGA - local search with insertion neighborhood until first improvement

Code 7.4: Julia code for AHGA - local search with pairwise interchange until best improvement

Code 7.5: Julia code for AHGA - local search with pairwise interchange until first improvement

Code 7.6: Julia code for iterated greedy algorithm


MASTER’S THESIS OBJECTIVE

Considering the intractable nature of flow shop scheduling, the primary objective of this dissertation was to develop algorithms capable of mitigating the computational burden associated with obtaining a solution for this type of problem. In view of this goal, three procedures were created. The first refers to a GA that employed discrete event simulation and customized genetic operators as a means to eliminate the evaluation of unfeasible solutions and incorporate problem-specific knowledge. The others consist of hybrid methods that improved the aforementioned framework with the addition of local search. Having specified the main goal, it is worth describing the secondary objectives of this work: 1) characterize the problem, 2) introduce fundamental and state-of-the-art solution methods and 3) evaluate the proposed solutions and propose means of improvement.

1. INTRODUCTION

Flow shop scheduling encompasses allocating a number of jobs to a previously ordered set of machines so that a determined objective function, such as makespan, is either minimized or maximized [1]. Despite the apparent simplicity of the problem, there is no known non-enumerative polynomial time algorithm capable of solving this type of optimization, except for those cases that can be treated by Johnson’s algorithm or that constitute proportionate flow shops.

Apart from the instances that admit efficient solution methods, the computational complexity of flow shop scheduling is quite challenging because the output of the problem grows at a factorial pace depending on the size of the input. In practical terms, this means that the number of permutation schedules that would need to be exhaustively evaluated in a flow shop consisting of ten jobs and m machines is 10! = 3,628,800. Moreover, including merely one additional job expands the quantity of potential candidates in a substantial manner: the solution space becomes eleven times larger, meaning that 11! = 39,916,800 feasible schedules would need to be evaluated. Understanding the intractable nature of flow shop scheduling, the primary objective of this dissertation was to develop algorithms capable of mitigating the computational burden associated with obtaining a solution for this type of problem. Taking this goal into account, three procedures were created. The first refers to a GA that employed discrete event simulation and customized genetic operators as a means to eliminate the evaluation of unfeasible solutions and incorporate problem-specific knowledge. The others consist of hybrid methods that improved the aforementioned framework with the addition of local search [2].
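To make this growth concrete, the following few lines of Julia (the language used for the implementations in the appendix) reproduce the counts cited above; this is merely an illustrative sketch:

```julia
# The number of permutation schedules grows as n!, regardless of the
# machine count m: adding one job multiplies the search space by n + 1.
for n in 9:12
    println(n, " jobs: ", factorial(n), " permutation schedules")
end
```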

As will be explained in detail later, the focus adopted herein on metaheuristics is justified by the fact that traditional optimization methods such as integer linear programming [3] [4] [5] [6] and branch and bound are unable to obtain a solution to flow shop scheduling problems of moderate to large size in a reasonable amount of time. Despite the aforementioned challenges, a series of other important problems can often be solved using these algorithms without major difficulties. In view of this progress, the theoretical part of this work will devote a substantial amount of attention to integer linear programming, with a particular emphasis on the branch and cut algorithm [7].

In addition to the advances verified in integer linear programming, the field of metaheuristics has also witnessed a remarkable amount of recent development. In this realm, a series of combination strategies or hybrid methods were created with the intent of improving the computational results of the many existing consolidated solution approaches. Even though other modes of hybridization may exist, the literature on the subject indicates that metaheuristics can be combined with the following techniques: local search, heuristics, integer linear programming, enumerative algorithms such as branch and bound, soft computing, problem-specific algorithms, artificial intelligence and even other metaheuristics. As will be evidenced later in further detail, methods that involve a combination with another metaheuristic or local search [8] often provide impressive results in terms of reducing the challenges related to the computational complexity of the flow shop scheduling problem.

Notwithstanding the achievements related to obtaining satisfactory solutions to NP-complete problems such as flow shop scheduling in a reasonable amount of time, one of the most relevant advances in the field of theoretical optimization comes from David Wolpert and William G. Macready’s no free lunch theorems [9]. In essence, this group of postulates refers to a framework that establishes the relations between algorithms and the problems they attempt to solve. Briefly summarized, these theorems assert that there is no intrinsically superior optimization technique: if one approach performs well on a determined class of problems, its performance must necessarily be offset on another class. But more than formally stating the theoretical impossibility of an all-purpose optimization strategy, the no free lunch theorems can also serve as a means to inspire designers to improve the performance of a current algorithm by indirectly emphasizing the importance of incorporating problem-specific knowledge. This passage introduces the aforementioned theorems:

“A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of ‘no free lunch’ (NFL) theorems are presented which establish that for any algorithm, any elevated performance over one class of problems is offset by performance over another class. These theorems result in a geometric interpretation of what it means for an algorithm to be well suited to an optimization problem. Applications of the NFL theorems to information-theoretic aspects of optimization and benchmark measures of performance are also presented.” [9, p. 67]

Having defined the main goal and presented the fundamental concepts used herein, it is worth specifying the goals and logical structure of this work:

1. Characterization of flow shop scheduling and its challenges
2. Introduction of fundamental and state-of-the-art solution methods
3. Creation of algorithms capable of mitigating the complexity of the problem and explanation of their rationale
4. Evaluation of the proposed solutions and means of improvement

Table 1.1: Goals and logical structure

Main goal:

- Create algorithms capable of mitigating the computational complexity of the flow shop scheduling problem.

Secondary objectives:

1. Characterize the problem:
- Complexity and relevance of the flow shop scheduling problem.
- Core question:
- What are the characteristics of the problem, and why is it so challenging to obtain a solution?
- Reference: second chapter.

2. Introduce fundamental and state-of-the-art solution approaches:
- Integer linear programming:
- Fundamental methods: cutting plane methods and branch and bound.
- State-of-the art methods: branch and cut and related approaches.
- Heuristics, metaheuristics and theoretical aspects:
- Fundamental methods: dispatching rules, heuristics and metaheuristics.
- State-of-the-art methods: hybrid metaheuristics, NEH heuristic.
- Main theoretical advance: no free lunch theorems.
- Core question:
- How have others solved flow shop scheduling and other discrete optimization problems in the past, and what are the expected limitations?
- Reference: third chapter.

3. Develop solutions and explain their rationale:
- AGA:
- Customized genetic algorithm with embedded discrete event simulation.
- AHGA1 and AHGA2:
- Hybrid methods that expand the previous framework by incorporating local search.
- Rationale:
- Overcoming the negative implications related to NFL theorems.
- Core question:
- How to create solutions that will deliver good performance?
- Reference: fourth chapter.

4. Evaluate the proposed solution and discuss means of improvement:
- Quantitative analysis:
- Proposed algorithms invariably converged to the exact global optimum.
- Worst-case scenario for AGA: evaluation of 20.83% of total feasible schedules.
- Worst-case scenario for AHGA1 and AHGA2: evaluation of 5.96% of total feasible schedules.
- Data indicates a substantial reduction of the computational burden related to the flow shop scheduling problem.
- Kaplan-Meier analysis.
- Performance differences observed between AGA and AHGA1 are statistically significant at any conventional level.
- There is no difference between the performances of the hybrid variants.
- Qualitative analysis:
- As expected, the performance of the proposed algorithms is intrinsically related to the amount of knowledge they incorporate.
- Improvements and future research:
- Changes to the current framework using iterated greedy algorithm.
- Core question:
- What is the performance of the proposed solutions and how can they be improved?
- Reference: fifth chapter.

2. FLOW SHOP SCHEDULING: PROBLEM CHARACTERIZATION

In the first volume of the Handbook of Combinatorics, Grötschel and Lovász [10] define optimization as the process of “finding the maximum or minimum of a certain function, called objective function defined on some domain” [10, p. 1543]. However, it is often the case that a particular domain cannot admit continuous values for logical reasons, but instead involves selecting an optimum from a finite discrete set of elements, constituting what is usually called discrete, combinatorial or integer optimization. Nemhauser and Wolsey [11] discuss the distinctive characteristics of this type of problem and its main uses:

“The distinguishing feature of discrete, combinatorial, or integer optimization is that some of the variables are required to belong to a discrete set, typically a subset of integers. These discrete restrictions allow the mathematical representation of phenomena or alternatives where indivisibility is required or where there is not a continuum of alternatives.

Discrete optimization problems abound in everyday life. An important and widespread area of applications concerns the management and efficient use of scarce resources to increase productivity. These applications include operational problems such as the distribution of goods, production scheduling, and machine sequencing. They also include planning problems such as capital budgeting, facility location and portfolio selection, and design problems such as telecommunication and transportation network design, VLSI circuit design and the design of automated production systems.” [11, p. vii]

Specifically in regard to flow shop scheduling, this type of problem can also be considered a case of discrete optimization due to the natural requirement for integer variables [12] [13]. However, that does not mean that all instances related to this kind of sequencing are inherently intractable, because polynomial time algorithms exist for 1) makespan minimization in the two-machine environment, and 2) the proportionate flow shop. Unfortunately, all other cases of the problem are known to be NP-complete, meaning that there is no known polynomial time non-enumerative algorithm capable of rendering optimal schedules. Providing evidence of the challenging nature of the problem, Garey, Johnson and Sethi characterize the NP-completeness of the permutation flow shop in the article ‘The complexity of flow shop and job shop scheduling’, published in 1976 in the journal Mathematics of Operations Research, Volume 1, Issue 2:

“NP-complete problems form an extensive equivalence class of combinatorial problems for which no non-enumerative algorithms are known. Our first result shows that determining a shortest-length schedule in an m-machine flow shop is NP-complete for m ≥ 3. (For m = 2, there is an efficient algorithm for finding such schedules.) The second result shows that determining a minimum mean-flow-time schedule in an m-machine flow shop is NP-complete for every m ≥ 2. Finally we show that the shortest-length schedule problem for an m-machine job shop is NP-complete for every m ≥ 2. Our results are strong in that they hold whether the problem size is measured by number of tasks, number of bits required to express the task lengths, or by the sum of the task lengths.” [14, p. 1]

Besides the type of objective, factors other than the size of the instance may increase the dimensionality of the flow shop scheduling problem. Assuming that the manufacturing environment is modelled using integer linear programming, including additional characteristics such as sequence-dependent setup times, blocking and reflow can quickly expand the number of integer variables, making the solution of the problem harder to obtain. In any case, Michael Piñedo [15] describes the essential assumptions regarding the classic permutation flow shop model as follows:

“In many manufacturing and assembly facilities each job has to undergo a series of operations. Often, these operations have to be done on all jobs in the same order implying that the jobs have to follow the same route. The machines are then assumed to be set up in series and the environment is referred to as a flow shop. The storage or buffer capacities in between successive machines may sometimes be, for all practical purposes, virtually unlimited. This is often the case when the products that are being processed are physically small (e.g., printed circuit boards, integrated circuits), making it relatively easy to store large quantities between machines. When the products are physically large (e.g., television sets, copiers), then the buffer space in between two successive machines may have a limited capacity, causing blocking. Blocking occurs when the buffer is full and the upstream machine is not allowed to release a job into the buffer after completing its processing. If this is the case, then the job has to remain at the upstream machine, preventing a job in the queue at that machine from beginning its processing.” [15, p. 151]

Complementing the remarks made by Piñedo [15], Gupta et al. [16] exhaustively describe this particular manufacturing environment by listing assumptions concerning jobs, machines and operating policies. This set of postulates can be seen in the following table:

Table 2.1: Permutation flow shop assumptions

Assumptions concerning jobs

J1. Each job is released to the shop at the beginning of the scheduling period.
J2. Each job may have its own due date which is fixed and is not subject to change.
J3. Each job is independent of each other.
J4. Each job consists of specified operations, each of which is performed by only one machine.
J5. Each job has a prescribed technological order which is the same for all jobs and is fixed.
J6. Each job (operation) requires a known and finite processing time to be processed by various machines. This processing time includes transportation and setup times, if any, and is independent of preceding and succeeding jobs.
J7. Each job is processed no more than once on any machine.
J8. Each job may have to wait between machines and thus in-process inventory is allowed.

Assumptions concerning machines

M1. Each machine center consists of only one machine; that is, the shop has only one machine of each type.
M2. Each machine is initially idle at the beginning of the scheduling period.
M3. Each machine in the shop operates independently of other machines and thus is capable of operating at its own maximum output rate.
M4. Each machine can process at most one job at a time. This eliminates those machines that are designed to process several jobs simultaneously like multi-spindle drills.
M5. Each machine is continuously available for processing jobs throughout the scheduling period and there are no interruptions due to breakdowns, maintenance or other such causes.

Assumptions concerning operating policies

P1. Each job is processed as early as possible. Thus, there is no intentional job waiting or machine idle time.
P2. Each job is considered to be an indivisible entity even though it may be composed of a number of individual units.
P3. Each job, once accepted, is processed to completion; that is, no cancellation of jobs is permitted.
P4. Each job (operation), once started on a machine, is processed until completion before another job can start on that machine; that is, no preemptive priorities are assigned.
P5. Each job is processed on no more than one machine at a time. (This is a result of assumptions J5 and P2.)
P6. Each machine is provided with adequate waiting space to allow jobs to wait before starting their processing.
P7. Each machine is fully allocated to the jobs under consideration for the entire scheduling period; that is, machines are not used for any other purpose throughout the scheduling period.
P8. Each machine processes jobs in the same sequence. That is, no passing or overtaking of jobs is permitted.

Having explained the assumptions and stated that there is no known efficient algorithm capable of providing an exact solution for flow shop scheduling, except for the few aforementioned cases, it is worth demonstrating the substantial computational burden associated with the problem. In this realm, assuming that the abovementioned postulates hold, it is easy to perceive by briefly analyzing the data below that the number of feasible schedules grows at a factorial pace according to the number of jobs present in a given instance. Taking the relations established between the input and the output of the problem into account, the overall time complexity can then be formally expressed as O(n!) in asymptotic notation.

Table 2.2: Flow shop scheduling complexity

illustration not visible in this excerpt

Clarifying some of the complexity theory concepts mentioned so far, an algorithm is said to be efficient if the number of steps required to obtain a solution grows at most as a polynomial function of the size of the input, and inefficient when this growth is exponential or factorial. Moreover, it is important to note that the time complexity denoted by asymptotic notation usually refers to a worst-case scenario. For instance, the simplex method proposed by Dantzig performs well in the vast majority of cases. However, in the anomalous counterexample provided by Klee and Minty [17], the aforementioned algorithm ends up visiting all of the vertices of the polyhedron, implying that the worst-case running time can actually occur even when the method is normally efficient. Similarly, branch and bound is an exponential procedure but is able to curtail enumeration in many empirical settings. Apart from the differences between worst and average cases, a chart and a table that summarize the abovementioned concepts are displayed below:

illustration not visible in this excerpt

Figure 2.1: Input output relations

Table 2.3: Asymptotic notation

illustration not visible in this excerpt

Recalling what was previously stated, the flow shop scheduling problem admits a polynomial time solution if the number of machines is equal to two. Seeing that this type of result is obtained using Johnson’s rule, the goals of the next section are to solve a practical example and to demonstrate how sequencing affects the makespan by either improving or hampering the flow of jobs present in the system.

2.1. Practical implementation: Johnson’s algorithm

In view of the abovementioned objectives, this section initiates with a table that provides all the parameters required to solve the permutation flow shop scheduling problem: 1) number of jobs, 2) number of machines, and 3) processing times.

F2 | prmu | Cmax denotes makespan minimization of a permutation flow shop consisting of two machines

illustration not visible in this excerpt

As the estimates in the coming figure will demonstrate, the makespan for the initial sequence j1, j2, j3, j4, j5 is equal to five hundred and twenty time units. However, this arrangement of jobs is not optimal, since it does not promote the uninterrupted circulation of tasks. On the contrary, the schedule depicted below clearly evidences unforced idleness of jobs and machines.

illustration not visible in this excerpt

Table 2.4: Problem data

illustration not visible in this excerpt

Figure 2.2.1: Makespan for sequence J1,J2,J3,J4,J5

Considering that the deployment of the jobs on the two machines is subject to improvement, a common strategy to reduce unforced idleness in permutation flow shops is to minimize the makespan. Since this goal can be achieved using Johnson’s rule, this algorithm will be executed herein using the following instructions:

1) From the table containing the processing times, find the task with the shortest duration and highlight it for the sake of simplicity. As can be easily verified, this value refers to fifty time units.

Table 2.4.1: Johnson’s algorithm step one

illustration not visible in this excerpt

2) Having completed step one, create a horizontal table with the corresponding number of jobs. Because the highlighted value is situated at coordinate j5m2, that is, on the second machine, job j5 is inserted into the last available slot on the right. Further explaining the mechanism proposed by Johnson, if the task with the lowest processing time had been located under machine one, the job would have been placed in the first available slot on the left.

illustration not visible in this excerpt

Figure 2.2.2: Johnson’s algorithm step two

3) Remove j5 from the table containing the processing times and again find the task or set of tasks with the smallest duration in the remaining set. Clearly, the minimal processing time of sixty units can be found in cells j2m1 and j1m2. Seeing that the highlighted coordinates relate to different machines, they do not constitute a tie and can be managed through the standard job placement procedure. As a result, the next step simply involves inserting j2 into the first available slot on the left and j1 into the last remaining slot on the right.

Table 2.4.2: Johnson’s algorithm step three

illustration not visible in this excerpt

illustration not visible in this excerpt

Figure 2.2.3: Johnson’s algorithm step three

After completing step three, discard jobs j1 and j2, then repeat the procedure of finding the task or tasks with the least duration. As can be verified in the coming table, the minimum processing time of eighty units refers to cells j3m1 and j4m1. Since these repeated values refer to the same machine (m1), there is a tie. In this situation, the stalemate can be broken arbitrarily by placing jobs three and four in the remaining positions in either order. This move yields two optimal solutions, denoted next:

Table 2.4.3: Johnson’s algorithm final step

illustration not visible in this excerpt

illustration not visible in this excerpt

Figure 2.2.4: Johnson’s algorithm optimal solutions

illustration not visible in this excerpt

Figure 2.2.5: Comparison between optimal and initial sequences
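Although the figures themselves are not visible in this excerpt, the whole procedure can be condensed into a short Julia sketch. The processing times below are hypothetical stand-ins, not the values from Table 2.4:

```julia
# Johnson's rule for the two-machine flow shop (makespan minimization).
# Jobs whose shortest operation is on machine 1 are scheduled as early as
# possible; jobs whose shortest operation is on machine 2, as late as possible.
function johnson(p1::Vector{Int}, p2::Vector{Int})
    front, back = Int[], Int[]
    remaining = collect(1:length(p1))
    while !isempty(remaining)
        best = remaining[1]                    # job with the overall minimum
        for j in remaining
            min(p1[j], p2[j]) < min(p1[best], p2[best]) && (best = j)
        end
        if p1[best] <= p2[best]
            push!(front, best)                 # earliest free slot on the left
        else
            pushfirst!(back, best)             # latest free slot on the right
        end
        filter!(j -> j != best, remaining)
    end
    return vcat(front, back)
end

# Makespan of a two-machine permutation schedule.
function makespan2(seq, p1, p2)
    c1, c2 = 0, 0
    for j in seq
        c1 += p1[j]                            # machine 1 finishes job j
        c2 = max(c1, c2) + p2[j]               # machine 2 waits if necessary
    end
    return c2
end

p1 = [3, 5, 1, 6, 7]                           # hypothetical processing times
p2 = [6, 2, 2, 6, 5]
seq = johnson(p1, p2)
println(seq, " -> makespan ", makespan2(seq, p1, p2))
```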

Having defined the assumptions concerning permutation flow shops and demonstrated how optimizing makespan promotes an ideal flow of jobs, the next section will discuss the relevance of the problem in the wider domain of computer science and operations research.

2.2. Relevance of the flow shop scheduling problem

To finalize this characterization, it is worth explaining the reasons that make the flow shop sequencing problem as relevant today as it was at the dawn of operations research. While it is relatively easy to understand that improved schedules constitute an important practical tool in manufacturing, the implications of finding an efficient algorithm for this type of problem greatly surpass the gains related to increased productivity, and can actually provoke a revolution in the field of computer science and operations research.

However, in order to properly clarify the impacts that may arise from obtaining a polynomial time solution for flow shop scheduling and other challenging problems, it is necessary to introduce the concept of NP-completeness [18]. In attempting to demonstrate the meaning of this idea, consider that a computer scientist often cannot come up with an efficient algorithm to solve a determined problem. However, failing to find such an algorithm does not prove beyond a shadow of doubt that the desired polynomial time solution does not exist. To circumvent this uncertainty, the researcher has to provide evidence that finding an efficient algorithm is an improbable task. The standard means of doing this is to formally demonstrate, through a mathematical transformation called a reduction, that the structure of the instance at hand is equivalent to that of a problem that is already deemed hard. In this realm, a given problem is called NP-complete if such a reduction from a known NP-complete problem can be performed in polynomial time.

Specifically regarding flow shop scheduling, this type of sequencing falls into the NP-complete category due to its equivalence to the 3-partition problem. As can be deduced from the last sentence, one practical consequence of this equivalence is that finding a method able to efficiently tackle 3-partition would automatically lead to a polynomial time solution for the flow shop scheduling problem, and vice-versa. But more than that, in the early 1970s, the American mathematician Stephen Cook [19] and the Ukrainian computer scientist Leonid Levin [20] proved almost simultaneously, and without knowledge of each other’s work, that Boolean Satisfiability is NP-complete, meaning that all problems in NP can be reduced to SAT in polynomial time. The implication of this thought-provoking theorem is that if someone finds an efficient solution method for Boolean Satisfiability, or for any problem that is known to be NP-complete, such as flow shop scheduling, every problem in the class would become efficiently solvable. As a result, fast polynomial time solutions would be available for a series of relevant problems. To complement this explanation, the following table elucidates the meaning of the aforementioned acronyms and summarizes the main ideas related to them:

illustration not visible in this excerpt

Table 2.5: Complexity classes

Aside from the intricacies related to complexity theory, NP-complete problems do not necessarily need to involve austere mathematical dilemmas. On the contrary, they can also encompass recreational activities such as Sudoku, Mastermind and Candy Crush [22] [23] [24] [25]. After all, it is not the level of formalization that makes these puzzles hard, but simply the fact that while it is easy to verify a correct solution, these games become exponentially or factorially more difficult according to the size of the input, as normally occurs in any other hard problem. Anyhow, the notion that many NP-complete problems actually make up part of the daily lives of many people gives some hope that at least one of them will be solved efficiently.

Besides the fact that many challenging problems share a common source of difficulty, mathematics is far from the point at which there is no boundary between quickly verifying solutions and finding them. It may even be the case that the classes of polynomially solvable and hard problems are proven to be irremediably separated, as the inequality P ≠ NP denotes. Looking on the positive side and taking into account the current state of theory, it is difficult to deny that obtaining good solutions to hard problems, such as flow shop scheduling, has become a more manageable task with the advances in computer science, integer linear programming, heuristics, metaheuristics and hybrid methods. This statement is widely supported by several empirical papers that almost invariably point to increased accuracy and reduced execution times. In addition, a relatively new set of theorems and mathematical proofs has substantially clarified the basis on which these approaches function in practice. In the worst case, it can be said with a good degree of confidence that finding an algorithm capable of mitigating the computational complexity of flow shop scheduling could provide insights on how to tackle discrete optimization problems with a similar combinatorial structure.

3. FLOW SHOP SCHEDULING: SOLUTION METHODS

In the book “Flow Shop Scheduling: Theoretical Results, Algorithms and Applications”, Hamilton Emmons and George Vairaktarakis [26] assert, in consonance with the works of Miloš Seda [27] and Michael Piñedo [15], that the most common methods used to obtain an exact solution for the m-machine flow shop scheduling problem are 1) the branch and bound algorithm and 2) integer linear programming. While progress on branch and bound can be viewed as a merely incremental affair [26], the field of integer linear programming has witnessed a series of encouraging developments, especially after the establishment of branch and cut methods. In their article ‘Progress in Linear Programming-Based Algorithms for Integer Programming: An Exposition’, Johnson, Nemhauser and Savelsbergh [28] summarize the achievements that have occurred in this area of mathematical optimization:

“In the last decade, the use of mixed integer programming models has increased dramatically. Fifteen years ago, mainframe computers were required to solve problems with a hundred integer variables. Now it is possible to solve problems with thousands of integer variables on a personal computer and obtain provably good approximate solutions to problems such as set partitioning with millions of binary variables. These advances have been made possible by developments in modeling, algorithms, software, and hardware. This paper focuses on effective modeling, preprocessing, and the methodologies of branch and cut and branch and price, which are the techniques that make it possible to treat problems with either a very large number of constraints or a very large number of variables.” [28, p. 2]

Even though they have been remarkably successful in solving large instances of several important problems, branch and cut algorithms can be applied only to a limited extent in flow shop scheduling [27] [29]. To explain this issue in a simple manner, the usefulness of these cutting-edge techniques relies on certain circumstances that facilitate curtailing enumeration, such as the absence of symmetries, the presence of strong cuts and the possibility of tightening the formulation. Contrary to these requirements, the flow shop scheduling problem often contains symmetries [29], tends to present multiple equivalent solutions, is typically formulated in terms of big M constraints and does not count on strong cuts. As a result of these factors, the use of integer linear programming is usually confined to smaller instances due to lack of progress in pruning.

Considering the limitations of integer linear programming, a range of heuristic and metaheuristic methods has been developed since the 1950s. They remain relevant to the flow shop problem due to their capacity to provide good schedules in a tolerable amount of time. Neelam Tyagi, R.G. Varshney and A.B. Chandramouli explain the importance of these approaches in the article ‘Six decades of flow shop scheduling research’:

“The flow shop scheduling problem is categorized as an NP hard problem and it means development of heuristic and meta-heuristic approaches to solve it is well justified. Several methods have been used to solve combinatorial optimization problem but each having its own limitation and advantages. (...) Several heuristic approaches for the flow shop scheduling problem are developed. In recent years, metaheuristic approaches, such as simulated annealing, tabu search, and genetic algorithms, ant colony optimization algorithms etc. have become very desirable in solving combinatorial optimization problems because of their computational performance.” [30, p. 2]

Having evidenced the relevance of heuristics and metaheuristics, it is important to call attention to the fact that these methods present substantial differences in terms of pertinence, precision and running time. For instance, some heuristics such as NEH present excellent performance, but their effectiveness is restricted to permutation flow shops. Acknowledging this diversity, the list below explains the advantages and disadvantages of the main classes of methods used to solve the flow shop scheduling problem [15]:

1) Approximation algorithms:
a. Relate to the methods that provide a known (but not necessarily favorable) degree of approximation to the optimal solution of a specific problem supported by a formal mathematical proof.
b. Involve algorithms that can be executed in polynomial or fully polynomial time.
c. Even though approximation algorithms can be employed in a number of scheduling problems, their usefulness in flow shop settings is very limited.

2) Dispatching rules:
a. Refer to several types of priority rules that can be used in scheduling problems.
b. Invariably consist of procedures that can be executed in polynomial time.
c. Although these algorithms can be optimal under certain circumstances, their practical deployment is not recommended in flow shop settings due to poor performance.

3) Heuristics:
a. Encompass a class of methods that attempt to provide a good, but not necessarily optimal, solution to a given problem.
b. Involve approaches that are based upon the plausibility of a solution.
c. Unlike metaheuristics, the usefulness of heuristic methods is restricted to specific problems.
d. As regards flow shop scheduling, the best-performing and most extensively tested heuristic is a constructive algorithm called NEH, proposed in 1983 by Nawaz, Enscore and Ham. Briefly put, this heuristic returns makespans within roughly three percent of the optimum, with a polynomial time complexity of O(n³ × m) (a minimal sketch is given after Figure 3.1 below).

4) Metaheuristics:
a. Refer to a broader type of heuristic that can be a priori used in a wide range of optimization problems.
b. Involve a general framework that is many times associated with some natural phenomenon.
c. Convergence may or may not be supported by a formal mathematical proof. Some methods such as genetic algorithms and simulated annealing are guaranteed to eventually reach an exact global optimum under relatively mild assumptions.
d. Convergence time is often unknown due to the stochastic characteristics of many of these methods.
e. With respect to the flow shop scheduling problem, Emmons and Varaiktarakis [26] affirm that the most relevant metaheuristic methods are simulated annealing, genetic algorithms and tabu search.
f. Another possible approach to obtain a solution to the flow shop scheduling problem is to combine GAs with local search or another metaheuristic algorithm. This class of solutions is called memetic or hybrid genetic algorithms.

illustration not visible in this excerpt

Figure 3.1: Common methods used to solve the permutation flow shop problem
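As promised above, the following Julia sketch illustrates the NEH insertion mechanism; the processing-time matrix is hypothetical, and the code is a simplified illustration rather than the tuned implementation discussed in section 3.8:

```julia
# Makespan of a permutation schedule, given p[machine, job].
function makespan(p::Matrix{Int}, seq::Vector{Int})
    c = zeros(Int, size(p, 1))                 # completion times per machine
    for j in seq
        c[1] += p[1, j]
        for i in 2:size(p, 1)
            c[i] = max(c[i], c[i-1]) + p[i, j] # wait for machine i or job j
        end
    end
    return c[end]
end

# NEH: sort jobs by decreasing total work, then insert each job into the
# position of the partial sequence that minimizes the partial makespan.
function neh(p::Matrix{Int})
    n = size(p, 2)
    order = sortperm([sum(p[:, j]) for j in 1:n], rev = true)
    seq = [order[1]]
    for j in order[2:end]
        candidates = [vcat(seq[1:k], j, seq[k+1:end]) for k in 0:length(seq)]
        seq = candidates[argmin(makespan.(Ref(p), candidates))]
    end
    return seq
end

p = [5 3 6 2; 4 8 2 5; 3 6 7 4]                # hypothetical 3 x 4 instance
seq = neh(p)
println(seq, " -> makespan ", makespan(p, seq))
```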

Even though the literature is centered on the techniques summarized above, Raidl [31] affirms that metaheuristic methods can be combined with the following approaches with the aim of facilitating convergence: 1) other metaheuristic methods; 2) problem-specific algorithms; 3) artificial intelligence approaches; 4) general techniques originating from the field of Operations Research. The table below exemplifies possible combination strategies:

Table 3.1: Combination strategies for metaheuristics

illustration not visible in this excerpt

In brief, the flow shop scheduling problem can be solved in a variety of ways, and this relatively short review merely provides a general outline of the subject. Taking this limitation into account, the next section will attempt to deepen the methodological discussion by focusing on three main frameworks: integer linear programming, evolutionary strategies and the NEH heuristic.

3.1. Advances in integer linear programming

As previously mentioned, the advances in integer linear programming have been remarkable. However, to explain this progress, it is necessary to consider that current state-of-the-art integer linear programming algorithms remain largely dependent on the enumerative framework established by branch and bound, meaning that most of the mathematical progress geared toward reducing the time required to process a given problem can be credited to procedures that operate in the direction of curtailing enumeration. Typically, attempts to reduce the computational burden related to branch and bound involve incorporating techniques such as problem preprocessing and cutting plane methods. Even though variations exist depending on the type of solver technology, the basic framework of cutting-edge integer linear programming algorithms generally involves the following procedures [32]:

1) Preprocessing equations by attempting a series of methods that seek to reduce the size of the matrix, such as eliminating redundant variables.
2) Checking the feasibility of the problem.
3) Solving the initial relaxation.
4) Attempting to tighten the formulation using preprocessing techniques that are specifically designed for integer programs.
5) Using one or several cutting plane generation approaches prior to the execution of the branch and bound algorithm. This method is called cut and branch.
6) Using one or several cutting plane generation techniques directly on the branch and bound tree node prior to pruning and branching. This approach, called branch and cut, is considered by many to be one of the main recent advances in the field of combinatorial optimization.
7) Employing the branch and bound framework along with the column generation procedure. This type of algorithm is called branch and price and is most frequently used in large-scale optimization problems.
8) Attempting to use some heuristic such as relaxation induced neighborhood search at several stages of the optimization.
9) Adopting violated cuts along with branch and bound and column generation in a procedure called branch, cut and price.

As seen above, solving integer linear programming problems with current technology involves such an extensive range of different procedures that it is not viable to discuss all of them here. Therefore, the theoretical material related to the subject will concentrate on the less ambitious goal of elucidating the mathematical principles that explain how optimization software actually works. Taking this goal into account, this work adopts the strategy of combining theoretical aspects with a series of expanded examples taken from Chen, Batson and Dang’s ‘Applied Integer Programming: Modeling and Solution’ [32], G. Srinivasan’s ‘Operations Research: Principles and Applications’ [33] and John E. Mitchell’s chapter in the ‘Handbook of Applied Optimization’ [34]. Besides these illustrations, completely original cases that encompass essential heuristic and metaheuristic approaches, such as NEH and genetic algorithms, will also be explained in a similar manner.

3.2. Integer linear programming: suitable problem structures

Seeing that success in obtaining a meaningful result using integer linear programming is intrinsically related to curtailing enumeration, this section briefly explains the occasions on which this technique constitutes a suitable option. Notwithstanding the prescriptive connotation of the task, the indications depicted herein do not seek to be exhaustive and, ultimately, the efficiency of a determined method must be verified empirically.

Apart from this initial warning, an essential but often disregarded aspect concerning the pertinence of integer linear programming is whether it is adequate to model a given problem using its typical algebraic formulation. Here, two essential assumptions concerning this approach must be taken into account: the first is linearity and the second is certainty [32]. From a pragmatic point of view, linearity is not particularly restrictive, because it is possible to circumvent this issue with piecewise functions. However, there is no mathematical work-around for the assumption of certainty, and it is often the case that this supposition is not particularly realistic. For instance, in machine scheduling problems, the principle of certainty may be sufficient to portray highly automated manufacturing environments but inadequate to represent settings that involve manual operations.

As several computational experiments have demonstrated, if the strategies used to curtail the exponential expansion of the branch and bound tree turn out to be successful, the size of the instance cannot be considered the most important factor related to the adequate performance of modern integer linear programming algorithms [28]. On the contrary, it can be the case that adding variables and constraints to the problem actually improves the running time by either tightening the bounds of the problem or by eliminating significant portions of the branch and bound tree. Still, the abovementioned procedures need to be conducted with parsimony, because the addition of violated cuts can worsen the performance of the simplex algorithm by constantly increasing the size of the input matrix. Besides parsimony, the good use of cutting plane methods is connected to the act of finding a suitable and preferably strong valid inequality. For instance, problems with well-defined structures, such as flow-conservation constraints in the network flow problem, can take advantage of valid inequalities that are able to eliminate entire facets of the integer convex hull, while more generic instances can at most benefit from cuts able to separate one or a few fractional solutions.

Another important remark is that even though the branch and bound framework is almost pervasive as regards solving problems with integer linear programming, there are some cases in which this procedure does not need to be used. With some luck, the areas related to the integer convex hull and the relaxation may be the same, meaning that under these circumstances it is not necessary to resort to any enumerative tactic to solve the problem. Apart from good fortune, enumerative procedures are not required when the constraint matrix is totally unimodular, that is, when every square submatrix has a determinant equal to zero, one or minus one, which guarantees that every basic solution of the relaxation is integral [35] (a brute-force verification sketch is given after the list below). The list below attempts to summarize the factors that may facilitate or hamper the use of integer linear programming:

Use of integer linear programming is favorable

- Problem input consists of a total unimodular matrix.
- Area bounded by the relaxation is the same as the integer convex hull.

Use of integer linear programming is facilitated

- Problem structure allows the use of strong cuts.
- Problem structure is favorable to improved formulations.
- Integrality gap is small.
- Binary integer linear programs.
- Problem allows the addition of strong valid inequalities.

Use of integer linear programming is not recommended

- The number of integer variables present in the instance is large and the methods used to curtail enumeration are not sufficient.
- Problem structure contains symmetry.
- Violation of integer linear programming assumptions.
- The relaxation is unfeasible or is taking too long to be calculated.
- Big M constraints are present.
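Returning to total unimodularity, the following Julia sketch brute-force checks the definition by enumerating every square submatrix; it is exponential and meant only for tiny, illustrative matrices, and the incidence matrix below is an assumed example rather than data from this thesis:

```julia
using LinearAlgebra, Combinatorics

# Total unimodularity: every square submatrix has determinant -1, 0 or +1.
function is_totally_unimodular(A::Matrix{Int})
    m, n = size(A)
    for k in 1:min(m, n)
        for rows in combinations(1:m, k), cols in combinations(1:n, k)
            abs(round(Int, det(A[rows, cols]))) > 1 && return false
        end
    end
    return true
end

# Node-arc incidence matrix of a small directed graph; matrices of this
# form (one +1 and one -1 per column) are known to be totally unimodular.
A = [ 1  1  0  0;
     -1  0  1  0;
      0 -1 -1  1;
      0  0  0 -1]
println(is_totally_unimodular(A))   # prints true
```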

After briefly explaining the conditions in which integer linear programming is suitable, the next section will provide a detailed explanation about the topic.

3.3. Integer linear programming: branch and bound method

Proposed by Land and Doig in 1960 [36], the branch and bound algorithm was not the first approach used to obtain the exact solution of an integer linear problem, but it became widely used due to its ability to converge faster to the optimal solution, mainly when compared to previous methods such as the complete enumeration of all integer solutions and the fractional cutting plane technique. The rationale behind this approach is not exceedingly complex. Basically, the algorithm implicitly searches the entire space of integer solutions by structuring the problem into a decision tree whose paths can be discarded for reasons of infeasibility or inferiority of a given solution. As can be deduced from this explanation, this approach is much more sensible than checking all possibilities by brute force. On the other hand, the method presents a significant deficiency related to the growth rate of the decision tree, which expands exponentially with the number of integer variables. Besides this handicap, there are no guarantees that the branches located near the root of the tree will be eliminated first; thus, it may not always be possible to discard a substantial number of subproblems. Moreover, it can even be the case that all subproblems must be evaluated.
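Before the worked example, the following Julia sketch condenses the method into a few lines: LP relaxations are solved with JuMP and GLPK (assumed to be installed), nodes are expanded according to the best-bound-first rule, and branching adds the floor/ceiling bounds described below. The two-variable instance at the end is hypothetical, not the thesis’s Matlab example:

```julia
using JuMP, GLPK

# LP relaxation of: maximize c'y subject to A*y <= b and lb <= y <= ub.
function solve_relaxation(c, A, b, lb, ub)
    n = length(c)
    model = Model(GLPK.Optimizer)
    @variable(model, lb[i] <= y[i = 1:n] <= ub[i])
    @objective(model, Max, sum(c[i] * y[i] for i in 1:n))
    @constraint(model, A * y .<= b)
    optimize!(model)
    termination_status(model) == MOI.OPTIMAL || return nothing, -Inf
    return value.(y), objective_value(model)
end

function branch_and_bound(c, A, b, lb, ub; tol = 1e-6)
    y0, z0 = solve_relaxation(c, A, b, lb, ub)
    y0 === nothing && return nothing, -Inf          # root is infeasible
    nodes = [(lb, ub, y0, z0)]                      # open subproblems
    best_sol, best_val = nothing, -Inf
    while !isempty(nodes)
        i = argmax([node[4] for node in nodes])     # best-bound-first rule
        l, u, y, z = nodes[i]
        deleteat!(nodes, i)
        z <= best_val + tol && continue             # prune by bound
        k = findfirst(v -> abs(v - round(v)) > tol, y)
        if k === nothing                            # all integer: incumbent
            best_sol, best_val = y, z
            continue
        end
        f = floor(y[k])                             # branch on variable y[k]
        for (nl, nu) in ((l, setindex!(copy(u), f, k)),
                         (setindex!(copy(l), f + 1, k), u))
            ys, zs = solve_relaxation(c, A, b, nl, nu)
            ys === nothing || push!(nodes, (nl, nu, ys, zs))
        end
    end
    return best_sol, best_val
end

# Hypothetical instance: maximize 5y1 + 4y2 with two resource constraints.
sol, val = branch_and_bound([5.0, 4.0], [6.0 4.0; 1.0 2.0], [24.0, 6.0],
                            [0.0, 0.0], [10.0, 10.0])
println("optimal integer solution ", sol, " with value ", val)
```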

3.3.1. Branch and bound method: practical implementation

To illustrate the ideas discussed in this section, the next part of this work will introduce a commented example of the branch and bound algorithm, along with the pertinent Matlab code. This should be sufficient to clarify the details that concern the practical execution of the algorithm, such as which steps can be taken arbitrarily, how to manage ties and how to assign priorities to certain paths using the best-bound-first rule. To begin this endeavor, the procedure starts with the solution of the problem as if there were no integer constraints, a process called relaxation in mathematical jargon:

Table 3.2: Branch and bound Matlab code

illustration not visible in this excerpt

After solving the initial relaxation, an initial upper bound for the problem is obtained. However, this answer is not satisfactory, given that the integrality condition for variables y1 and y2 is violated. Acknowledging the need for correction, it is necessary to add constraints, which in this particular method always consist of immediate upper and lower bounds on one of these variables. Since this choice can be made arbitrarily, y1 is selected. Seeing that y1 has a value of approximately 0.54 and must be greater than or equal to zero, the constraints y1 = 0 and y1 >= 1 must be added to the problem as follows:

Table 3.2.1: Nodes two and three

illustration not visible in this excerpt

According to the above table, violations of integer constraints persist, implying that the algorithm must proceed. Using the criterion called the best-bound-first rule, the path of greatest optimal value is prioritized. As a result of this strategy, the procedure continues with a further iteration that departs from the second node to the following nodes:

Table 3.2.2: Nodes four and five

illustration not visible in this excerpt

The data above show that node five is eliminated for unfeasibility, implying that it is no longer necessary to investigate this space. Moreover, it can be observed that the optimal value for node four is still greater than that of node three and that there are fractional values in the solution of subproblem four. Given these circumstances, the branch and bound procedure must advance from the remaining feasible node, once again according to the best-bound-first rule:

Table 3.2.3: Nodes six and seven

illustration not visible in this excerpt

Seeing that all required variables are integer in node six, the overall problem has a candidate for an optimal solution. The algorithm therefore halts at this location and a lower bound for the overall problem is established. Besides this occurrence, the iterations performed so far show that there is a tie between the optimal values of node seven and node three. Since there is no rule concerning this matter, the stalemate can be broken arbitrarily, and thus the algorithm now departs from subproblem seven to the subsequent nodes:

Table 3.2.4: Nodes eight and nine

illustration not visible in this excerpt

As the solution develops, it can be grasped from the preceding iteration that subproblem nine is unfeasible and therefore the algorithm is interrupted at this spot. In addition, node eight displays an optimum that no longer constitutes the upper bound for the problem. As a result of this change, the algorithm moves back to node three, which is then split into nodes ten and eleven to restore the integrality condition of variable y2. Similarly to the previous steps, this violation is resolved by adding the constraints y2 <= 5 and y2 >= 6:

Table 3.2.5: Nodes ten and eleven

illustration not visible in this excerpt

Reducing the search space further, Table 3.2.5 denotes that node eleven is pruned by reason of unfeasibility. Considering that subproblem ten continues to represent the best upper bound of the problem and that there are still fractional values that need to be adjusted, the procedure unfolds as shown for nodes twelve and thirteen:

Table 3.2.6: Nodes twelve and thirteen

illustration not visible in this excerpt

The last table shows that node twelve does not violate any integer constraints and therefore constitutes a new potential optimal solution. For the reason that the optimal value obtained in this subproblem is higher than the previous candidate located on node six, the lower bound related to the overall problem is updated. As a consequence of this revision, and taking into account that node thirteen presents a solution that is inferior to the new lower bound, this branch is pruned and the procedure unfolds as follows:

Table 3.2.7: Nodes fourteen and fifteen

illustration not visible in this excerpt

In this last stage of the iterations, node fourteen is eliminated due to the violation of the updated lower bound. Furthermore, node fifteen is discarded due to unfeasibility. Since there are no longer any viable nodes, the problem is finally optimized at node twelve with the value of nine and sixty-seven hundredths [32].

illustration not visible in this excerpt

Figure 3.2: Branch and bound practical implementation tree

Having demonstrated how the branch and bound method works, the next part of this work will continue the discussion by introducing fractional cutting plane methods.

3.4. Fractional cutting plane method

In spite of the diversity of approaches used to solve integer linear programming problems, the fractional cutting plane algorithm is included here as the basis of other methods and as an example that portrays the negative consequences of resorting to techniques that rely on weak inequalities. Briefly explained, the method departs from the simplex table derived from the problem relaxation and successively introduces inequalities until all the necessary fractional values are eliminated. In order to perform this task, a new constraint and a new variable enter the basis at each iteration. Even though the fractional cutting plane method makes sense from an intuitive perspective, it presents two interconnected deficiencies that severely impair its performance. As will be discussed in further detail later, the cuts are weak, meaning that many iterations are usually required to obtain an optimal solution. In addition, by introducing one new variable and one new constraint at every iteration, the Gomory slack variable actually increases the size of the simplex table, making the process of convergence increasingly slower. With these considerations in mind, the example below attempts to illustrate all the points made in this section about cutting plane methods, including, of course, how the valid inequality is generated in practice:

3.4.1. Fractional cutting plane method: practical implementation

illustration not visible in this excerpt

Step 1) Obtain an optimal simplex table:

Table 3.3: Fractional cutting plane method step one

illustration not visible in this excerpt

Besides providing an initial upper bound to the problem, the optimal simplex table above reveals that the integrality constraints on x1 and x2 are violated. For this reason, it is necessary to add a constraint consisting of a fractional cut. Since either of these variables can be chosen arbitrarily, x2 will be selected and the inequality will be constructed as follows:

Step 2) Generate the fractional cut:

illustration not visible in this excerpt
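Although the specific inequality constructed for x2 belongs to the omitted illustration, the general recipe can be stated. Given the simplex row of the chosen basic variable with a fractional right-hand side,

$$x_{B} + \sum_{j \in N} \bar{a}_{j}\, x_{j} = \bar{b}, \qquad \bar{b} \notin \mathbb{Z},$$

the Gomory fractional cut retains only the fractional parts:

$$\sum_{j \in N} \left(\bar{a}_{j} - \lfloor \bar{a}_{j} \rfloor\right) x_{j} \;\ge\; \bar{b} - \lfloor \bar{b} \rfloor .$$

Introducing a nonnegative slack variable turns this cut into the equality appended to the table with a zero cost coefficient, which is the role played by x5 in step three below.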

Step 3) Introduce the fractional cut into the simplex table:

Table 3.3.1: Fractional cutting plane method step three

illustration not visible in this excerpt

As can be seen in step two, the Gomory cut originates from a greater-than-or-equal inequality. Besides this brief comment, it is worth mentioning that x5 enters as a basic variable with a cost contribution of zero. Having clarified the meaning of the previous iteration, the algorithm proceeds as shown below:

Step 4) Perform a simplex iteration:

Table 3.3.2: Fractional cutting plane method step four

illustration not visible in this excerpt

Once more, a clear violation of the integer constraint on x1 is detected, meaning that the procedure must continue with the generation of another cut. This is not the only issue with the simplex table above: taking into account that the fractional cutting plane method is an all-integer algorithm, all basic variables, including x4, must be integers.

Step 5) Generate the second fractional cut:

illustration not visible in this excerpt

Step 6) Introduce the fractional cut into the simplex table:

Table 3.3.3: Fractional cutting plane method step six

illustration not visible in this excerpt

Step 7) Perform another simplex iteration:

Table 3.3.4: Fractional cutting plane method step seven

illustration not visible in this excerpt

Seeing that there are no more fractional basic variables in the simplex table, the fractional cutting plane algorithm halts, yielding an optimal value of four with x1 = 2 and x2 = 2. Once more, it is necessary to emphasize that the introduction of a Gomory slack variable at every iteration increases the size of the simplex table. As a consequence of this characteristic, not to mention the weakness of the cuts, convergence gets increasingly slower. Therefore, the use of the fractional cutting plane method as a standalone tool should preferably be restricted to simpler integer linear programming instances.

3.5. Cutting plane methods

As is typical of other approaches, cutting plane methods involve adding constraints to an integer linear problem with the goal of obtaining an optimal solution, or at least of making that objective easier to attain. Notwithstanding the differences in how these constraints are generated, they must invariably consist of violated cuts. To understand exactly what this term means, it is necessary to resort to some preliminary definitions. Chen, Batson and Dang [32] point out that a valid inequality refers to a mathematical expression that is satisfied by all feasible solutions of an integer linear problem, denoting that the constraints generated by cutting plane methods should not exclude feasible integer solutions under any circumstance. Of course, this statement also holds for mixed integer linear programming problems, but in that case the inequalities must also be satisfied by the feasible fractional solutions. The graphic below attempts to illustrate these definitions in a more tangible manner [32]:

illustration not visible in this excerpt

Figure 3.3: Valid inequalities

A brief examination of the graphic shows that inequality d) cannot be considered valid since it excludes the feasible integer solutions denoted by the dots. Of the remaining objects, lines a), b) and c) denote valid inequalities. However, it should be noticed that a) is not a violated cut since it does not cross the feasible solution area marked in gray. In addition, cut c) is stronger than b) because the region that b) cuts away from the relaxation constitutes a subset of the region cut away by c).
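Formally, and restating the definitions of Chen, Batson and Dang [32], an inequality $\pi^{\top}x \le \pi_{0}$ is valid for the feasible set $S$ of an integer program when $\pi^{\top}x \le \pi_{0}$ holds for every $x \in S$; it is additionally a violated cut with respect to the relaxation optimum $x^{*}$ when $\pi^{\top}x^{*} > \pi_{0}$.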

Apart from the formal definitions, mere reasoning suggests that the inequalities generated from violated cuts should operate to shrink, as much as possible, the space between the wider area bounded by the problem relaxation and the convex envelope that contains the true integer optimum. However, the desirable characteristic of eliminating large parts of this region is rare, meaning that strong cuts are restricted to a few problems that present highly specific structures. In these few particular cases, the inequalities correspond to entire facets of the convex hull of the feasible region. Unfortunately, for less structured problems, the cuts derived from the inequalities are either weak or very weak, implying that they can at most separate infeasible fractional points. Chen, Batson and Dang [32] define the types of valid inequalities as follows:

“Type 1 valid inequalities are generated from pure and mixed IP problems with no special structure. The advantage of this type is that they can always be used to separate a fractional point and can be applied to all IP or MIP problems, but the disadvantage is that the derived cuts are usually very weak. (...).Type 2 inequalities are basically derived from problems with some local structure by considering a single constraint (such as knapsack sets) or a subset of the problem constraints (such as set packing). The inequalities thus derived can at best only separate fractional points that are infeasible to the convex hull of the relaxation. Frequently, type 2 inequalities are facets of the convex hull of the relaxation and should be stronger than type 1 inequalities. (...) .Type 3 inequalities are typically derived from a large part or full set of a specific problem structure such as the flow-conservation constraints in the network flow problem. Usually these inequalities are very strong because they may come from certain known classes of facets of the convex hull of feasible regions. However, their applications are limited to the particular type of problem. These cuts are very useful (...).” [32, pp. 308-309]

Acknowledging the weakness of generic type 1 inequalities, the adequate use of cutting plane techniques in integer linear programming depends on a careful evaluation of the structure of the instance at hand. In this realm, the table below, developed by the IBM specialists Klotz and Newman [37], addresses the relations between violated cuts and problem structure:

Table 3.4: Relations between cuts and problem structures

illustration not visible in this excerpt

After introducing the concept of cutting plane methods and explaining their relevance, the next section demonstrates how they can be used in conjunction with the branch and bound technique.

3.6. Branch and cut method overview

As mentioned previously, the branch and cut method is considered by many to be one of the main recent advances in the field of integer linear programming. However, this solution approach does not constitute a radical mathematical innovation, but is constructed upon the branch and bound framework. The following passage explains the basic mechanism of this algorithm:

“Branch and cut is a generalization of branch-and-bound where, after solving the LP relaxation, and having not been successful in pruning the node on the basis of the LP solution, we try to find a violated cut. If one or more violated cuts are found, they are added to the formulation and the LP is solved again. If none are found, we branch.” [38, p. 42]

Although the branch and cut method seems relatively free of complications, the practical efficiency of the algorithm in terms of convergence is intrinsically related to the manner in which these violated cuts are managed. The most obvious measure in this direction is fathoming the node when the solution provided by the inclusion of a valid inequality does not promote sufficient improvement over previous iterations.

Another problem concerning the branch and cut method involves managing the number of violated cuts. If an excessive number of valid inequalities is incorporated into the problem, they may slow down convergence due to the extra time required to process the enlarged formulation. One way to tackle this issue is to discard valid inequalities that do not contribute to convergence by deleting inactive cuts. Other strategies used to circumvent the problem include limiting the number of cuts, skipping certain levels, placing valid inequalities in the upper levels of the decision tree, or halting the inclusion of cuts after a certain point. In addition to these cut management strategies, practical software implementations of the branch and cut algorithm usually employ it in conjunction with other methods, the most important being better formulations obtained through preprocessing, and heuristics. To conclude the overview of the branch and cut method, a basic description of the algorithm is included below [32].

Notation:

illustration not visible in this excerpt

Table 3.5: Branch and cut algorithm description

illustration not visible in this excerpt
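Since the algorithm description itself belongs to the omitted table, the quoted mechanism can be condensed into the following Matlab skeleton. The function handles solve_lp, find_cuts and branch are hypothetical placeholders standing in for the corresponding steps of Table 3.5, and a minimization problem is assumed:

function [incumbent, best] = branch_and_cut(root, solve_lp, find_cuts, branch)
% Skeleton of the quoted branch-and-cut loop (a sketch, not the thesis listing).
% Nodes are assumed to be structs with a field 'cuts'; branch is assumed to
% return a cell array of child nodes.
    nodes = {root};  best = inf;  incumbent = [];
    while ~isempty(nodes)
        node = nodes{end};  nodes(end) = [];        % depth-first node selection
        [x, val, feasible] = solve_lp(node);        % solve the LP relaxation
        if ~feasible || val >= best, continue; end  % prune: infeasible or bounded out
        if all(abs(x - round(x)) < 1e-6)
            best = val;  incumbent = round(x);      % integer: update the incumbent
            continue
        end
        cuts = find_cuts(node, x);                  % try to find violated cuts first
        if ~isempty(cuts)
            node.cuts = [node.cuts; cuts];          % strengthen the formulation
            nodes{end+1} = node;                    % ... and re-solve the same node
        else
            nodes = [nodes, branch(node, x)];       % no cuts found: branch
        end
    end
end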

3.6.1. Branch and cut method: practical implementation

In the same vein as the other examples, the goal of this short illustrative case is to demonstrate the practical execution of the branch and cut algorithm. Taking into account that the problem at hand presents only two variables, it is useful to include graphical elements that illustrate the interaction between the area bounded by the problem relaxation, the integer hull and the inequalities that are gradually introduced. In addition to this visual part, a new type of cut, called Chvátal-Gomory, will also be presented along with the due instructions. To begin, the initial problem formulation is shown as follows:

illustration not visible in this excerpt

Solve the problem relaxation:

Table 3.6.1: Matlab code for relaxation

illustration not visible in this excerpt

Figure 3.4.1: Branch and cut initial relaxation

Sharing a common step with other approaches, the first stage of the branch and cut method involves solving the initial relaxation, which determines the first bound for the problem. Seeing that fractional values persist, variable x1 is selected arbitrarily and the algorithm will proceed until an optimal integer solution is found. As a way to depict these relationships, a graphic that encompasses the region bounded by the relaxation and the feasible integer solutions is displayed above. For the sake of simplification, the non-binding inequalities x2 < 5 and x1 < 10 are excluded from the graphic.

Table 3.6.2: Branch and cut, step two, node two

illustration not visible in this excerpt

Table 3.6.3: Branch and cut, step two, node three

illustration not visible in this excerpt

The results of the calculations performed for node two show that the optimal values of variables x1 and x2 do not violate any integrality conditions, and therefore a candidate optimal solution is found. As a consequence of this preliminary conclusion, a new bound for the problem is established and the iteration halts at this node. However, fractional values can still be verified in node three, so the algorithm continues from this node to the next iteration.

Step 4) Node 4:

Instead of branching by generating constraints on the immediate upper and lower bounds of variable x1, the branch and cut algorithm now attempts to perform a cut at node four. Taking into account that the Chvátal-Gomory cut is a valid inequality, it enters the system of equations that encompasses the problem at hand. The following operations show the details of this process:

illustration not visible in this excerpt
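The multipliers used in these operations belong to the omitted illustration, but the general Chvátal-Gomory recipe is worth recording: for a system $Ax \le b$ with $x \ge 0$ integer and any multiplier vector $u \ge 0$, rounding yields the valid inequality

$$\sum_{j} \big\lfloor u^{\top}A_{j} \big\rfloor\, x_{j} \;\le\; \big\lfloor u^{\top} b \big\rfloor,$$

where $A_{j}$ denotes the j-th column of $A$.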

Table 3.6.4: Branch and cut node four

illustration not visible in this excerpt

illustration not visible in this excerpt

Figure 3.4.2: Branch and cut optimal result compared to relaxation

In the iteration just presented, the optimal values of variables x1 and x2 do not violate any integer constraints, a fact that by itself would interrupt the execution of the algorithm at this node. More than that, the outcome of the objective function at node four is greater than that obtained at node two, implying that node four must be eliminated from the decision tree because it violates the current upper bound. Seeing that the viable nodes are exhausted after this last iteration, the problem is optimized at node two with an optimal value of -28 [34].

illustration not visible in this excerpt

Figure 3.5: Branch and cut tree

To finalize this section, the branch and cut tree above summarizes the operations performed at each node.

3.7. Genetic algorithms

Contrasting with the formalism of integer linear programming, genetic algorithms (GAs) refer to a metaheuristic inspired by Darwin's theory of natural selection. In this realm, the canonical version of the algorithm encompasses the establishment of an initial population that evolves toward an optimum through successive processes of selection, crossover and mutation. Gen and Cheng [39] elucidate the overall mechanism and the terminology related to the subject:

“The genetic algorithm maintains a population of individuals, say P(t), for generation t. Each individual represents a potential solution to the problem at hand. Each individual is evaluated to give some measure of its fitness. Some individuals undergo stochastic transformations by means of genetic operations to form new individuals. There are two types of transformation: mutation, which creates new individuals by making changes in a single individual, and crossover, which creates new individuals by combining parts of individuals. New individuals called offspring C(t), are then evaluated. A new population is formed by selecting the more fit individuals from the parent population and the offspring population. After several generations, the algorithm converges to the best individual, which hopefully represents an optimal or suboptimal solution to the problem.” [39, p. 2]

Notwithstanding the biological character of the approach, GAs are known for obtaining good solutions in a reasonable amount of time, even when the problem at hand involves discontinuities or integer requirements. However, to understand the basis of the robustness of GAs, it is necessary to examine the role of the genetic operators in facilitating convergence toward an optimal solution and in exploring the search space.

Starting with the first aspect, convergence is stimulated by the pressure exerted through selection and by the usually elitist character of the algorithm. In GAs, however, the process of selection is enhanced by a distinct process called crossover that engenders the inheritance of the best traits from the parents by their offspring. Mimicking the process present in natural life, crossover operates by combining the information encoded in the parents' chromosomes. As regards the second matter of exploring a given search space, one of the most relevant characteristics of GAs is the establishment and maintenance of a population. Assuming that this group presents sufficient size and diversity, the probability of a proper investigation of the feasible area increases. Moreover, at an individual level, the stochastic process of mutation further encourages the diversity of solutions by slightly altering the offspring.
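The mechanism just described can be condensed into a short, runnable Matlab sketch. The toy objective below (minimizing the sum of squared genes over integers between zero and thirty) and all parameter values are assumptions chosen purely for illustration:

% Minimal runnable GA sketch of the canonical loop described above. The toy
% objective (minimize sum(x.^2) over integer genes in 0..30) is an assumption.
popSize = 6;  nGenes = 4;  maxGen = 50;  pMut = 0.1;
P = randi([0 30], popSize, nGenes);              % initial population
for t = 1:maxGen
    fit = sum(P.^2, 2);                          % evaluate fitness (lower is better)
    [~, order] = sort(fit);
    P = P(order, :);                             % rank: best individual first
    elite = P(1, :);                             % remember the best (elitism)
    kids = zeros(popSize, nGenes);
    for k = 1:popSize
        p1 = P(randi(3), :);  p2 = P(randi(3), :);   % parents from the best half
        cut = randi(nGenes - 1);                     % one-cut point crossover
        kids(k, :) = [p1(1:cut), p2(cut+1:end)];
    end
    mask = rand(popSize, nGenes) < pMut;         % mutation: random resetting
    kids(mask) = randi([0 30], nnz(mask), 1);
    kids(1, :) = elite;                          % elitist replacement
    P = kids;
end
fit = sum(P.^2, 2);
[~, best] = min(fit);
disp(P(best, :))                                 % best individual found

The elitist replacement at the end of each generation preserves the current best individual, the property discussed further in section 3.7.3.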

3.7.1. Genetic algorithms: parameter settings

Although the structural logic of GAs seems to properly address convergence, the practical implementation of this approach necessarily involves setting the genetic components. Unfortunately, the literature on the subject does not provide exact guidelines on how this task should be performed, offering only empirical rules. One example in this domain refers to the population component: whereas it is widely accepted that a relatively small population size leads to premature convergence of the algorithm, there is no definite criterion for the actual figures that should be used. Moreover, population size is not the only parameter of the algorithm that must be set using empirical criteria; crossover, mutation and selection are affected by the same challenge.

Still, genetic components do not need to be set statically but can be configured in a manner that adapts to the problem at hand or to the characteristics of the evolutionary process. As regards the first category, adapting to the context of a problem can occur through a deterministic rule, such as reducing the rate of mutation as generations elapse, or, more sophisticatedly, through feedback. The second, self-adaptive category is even more refined and involves changes in the structure of the GA itself, for example the use of different mutation operators according to the success of the procedure.

In addition to this, fuzzy inference systems can be used to control a number of genetic operators. The approach of Wang, Wang and Hu [40], for instance, consists of two fuzzy controllers that alter the crossover and mutation ratios according to changes in the average fitness values. The logic of this procedure is straightforward: a) if the fitness of the population stagnates at a low level, the crossover and mutation rates should be increased slowly; b) if overall fitness is high, the diversity of the population should be diminished by decreasing the rates of crossover and mutation; c) if the change in average fitness is very close to zero, crossover and mutation should be increased quickly. The figure below displays the membership function used to transform the crisp values of the average fitness change into the categories (NR) negative larger, (NL) negative large, (NM) negative medium, (NS) negative small, (ZE) zero, (PS) positive small, (PM) positive medium, (PL) positive large and (PR) positive larger.

illustration not visible in this excerpt

Figure 3.6: Fuzzy controller for mutation and crossover operators
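A crisp caricature of rules a) to c) may clarify the controller's direction of action. All thresholds and step sizes below are assumptions; the actual controller of Wang, Wang and Hu maps the change in average fitness through the fuzzy sets NR to PR rather than through hard if-else boundaries:

function [pc, pm] = adapt_rates(pc, pm, avgOld, avgNew)
% Crisp caricature of rules a)-c); all constants are illustrative
% assumptions, not the fuzzy controller of reference [40].
    dF = avgNew - avgOld;                    % change in average fitness
    if abs(dF) < 1e-3                        % c) change very close to zero
        pc = min(pc + 0.10, 0.95);  pm = min(pm + 0.10, 0.30);
    elseif dF < 0                            % a) fitness stagnating at a low level
        pc = min(pc + 0.02, 0.95);  pm = min(pm + 0.02, 0.30);
    else                                     % b) overall fitness high: cut diversity
        pc = max(pc - 0.02, 0.50);  pm = max(pm - 0.02, 0.01);
    end
end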

Besides controlling genetic operators such as crossover, mutation and population size, fuzzy logic can be used to construct multiobjective fitness functions. This approach is particularly useful when techniques such as Pareto optimality result in a large set of noninferior solutions, which may require additional computational power and effort from the decision-maker to evaluate all the available options. According to Gen and Cheng [39], production planning problems can be addressed by establishing a fuzzy fitness function. This strategy can be pertinent since it is often the case that management does not seek an optimal value but aspires to a level of satisfaction that is hard to define in precise terms. The conditions favoring the use of a fuzzy objective function are fully explained here:

“In actual production planning problems, however, the available quantity of a resource in a period is uncertain and possesses different types of fuzziness. Also, the objective defined by the decision maker is an ill-defined goal, or there are some fuzzy parameters in the objective function due to the lack of full understanding of the problem or of the actual maximization and minimization limits. We discuss nonlinear programming problems with the following types of fuzzy objectives and fuzzy resource constraints.

1. The objective value that the decision maker desires is not an actual maximum but a fuzzy value. The decision maker aspires to reach a level such as z0, and not less than a lowest level z0-p0. Moreover, the decision maker satisfaction degree increases as the objective value increases.” [39, p. 157]

Beyond the use of soft computing techniques, it is also possible to combine GAs with local search or other metaheuristic methods. As will be discussed in section five, the evidence gathered here suggests that the addition of local search accelerates convergence to a global optimum in a substantial manner.

3.7.2. Genetic algorithms: encoding

Encoding refers to the act of representing a candidate solution in chromosomal format. Although this process is often trivial in continuous problems, it can be challenging in discrete optimization due to the number of permutations that need to be managed. Here, it is often the case that the solutions encoded by random number generators are illegal, redundant or incapable of reaching all points of the solution space. As a result of these flaws, selection will not function adequately and the optimization process will be seriously compromised. Moreover, it can be the case that the encoding is constructed without considering the crossover operator, a factor that also leads to poor performance. To avoid these pernicious consequences, it is highly recommended to structure the GA using the following properties [39]:

- Legality: the permutations made in a given chromosome should necessarily lead to a solution.

- Completeness: any solution should have a reciprocal chromosomal representation. This proposition is conceived to ensure that any point in the solution space can be reached by genetic search.

- Lamarckian property: this principle relates to the ability of a given chromosome representation to facilitate inheritance. If this is not the case, the desirable traits of the offspring will be passed to the next lineage only by random chance.

- Non-redundancy: the relation between solutions and encoding should be established on a one-to-one basis to impede frivolous calculations related to the generation of identical offspring.

- Causality: small changes on the genotype space caused by mutation should result in equally small changes in the phenotype space.
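In practice, the legality and non-redundancy properties are easiest to obtain with a permutation encoding, as in the following minimal Matlab sketch (the job count of seven anticipates the instance used in chapter four):

% Encoding sketch: randperm yields chromosomes that are legal and
% non-redundant by construction (each chromosome is exactly one schedule).
nJobs = 7;
chrom = randperm(nJobs);                 % e.g. [3 1 7 2 5 4 6]
assert(isequal(sort(chrom), 1:nJobs));   % legality: a true permutation of 1..7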

As explained in this section, encoding should not be taken lightly. Understanding the negative consequences of dismissing these properties, the coming chapter of this dissertation will discuss in depth how the proposed solutions manage to strictly abide by these principles and by the NFL theorems.

3.7.3. Genetic algorithms: convergence properties

Although many optimization methods based on metaheuristics do not guarantee exact results, some approaches, such as genetic algorithms and simulated annealing, have proofs of convergence. Even though the evidence is compelling, these theorems make no direct reference to when the optimal solution will be reached. Eiben et al. [41] [42] proved using Markov chain analysis that GAs converge to a global optimum with probability one if certain conditions are met. The authors specify these requirements exactly:

“It has been proved that an EA optimizing a function over an arbitrary finite space converges to an optimum with a probability 1 under some rather permissive conditions. Simplifying and reformulating the results, it is shown that if in any given population:

1. Every individual has a nonzero probability to be selected as a parent, and
2. Every individual has a nonzero chance to be selected as a survivor, and
3. Any solution can be created by the action of variation operators with a nonzero probability.

Then the nth generation would certainly contain the global optimum for some n.” [41, p. 198]

Corroborating Eiben et al. [42], Günter Rudolph [43] agrees that GAs will eventually converge. However, his views on the conditions necessary for optimality differ substantially from those of the previous scholars. Summarizing these discrepancies, Rudolph emphatically states that elitism is the key factor in the global convergence of GAs:

“It is proved by means of homogeneous finite Markov chain analysis that a CGA will never converge to the global optimum regardless of the initialization, crossover operator and objective function. But variants of CGAs that always maintain the best solution in the population, either before or after selection, are shown to converge to the global optimum due to the irreducibility property of the underlying original nonconvergent CGA.” [43, p. 1]

Confirming the findings of Rudolph, authors such as Suzuki [44], Agapie [45] and Lozano [46] arrived at similar conclusions. Seeing that the global convergence of GAs is widely accepted, the remaining issue involves the relative efficiency of evolutionary algorithms in general. Although there is still some controversy on the subject, the literature indicates a strong consensus that the convergence properties of a GA can be improved if problem-specific knowledge is incorporated.

3.7.4. Genetic algorithms: practical implementation

The goal of this practical implementation is to minimize a function containing four variables and to demonstrate how genetic operators function. For the sake of computational simplicity, the values of a,b,c and d are required to be integers between 0 and 30. The complete problem formulation can be verified next:

illustration not visible in this excerpt

Step 1) Initialization

As commonly happens in this class of heuristics, the initial step of the GA involves creating a set of random solutions. In this example, the initial lineage consists of six chromosomes encoded by the vector [a;b;c;d], which assumes integer values ranging from zero to thirty as a means to enforce the restrictions present in the problem.

illustration not visible in this excerpt

Step 2) Evaluation

Using the data related to the population depicted above, the next stage of the algorithm encompasses evaluating chromosomal fitness. As will be demonstrated next, this measurement is performed at an individual level by substituting the encoded values into the objective function:

illustration not visible in this excerpt

Step 3) Selection

In evolutionary algorithms, selection refers to the act of assigning probabilities to chromosomes in such a way that the fittest individuals have the greatest chance of producing offspring. In this example, this goal is achieved by: 1) calculating the cumulative probabilities; 2) using the roulette wheel algorithm to determine the individuals that will form the next generation.

Despite the simplicity of the method used here, it is possible to use more sophisticated selection approaches. However, the additional computational effort of coding and running routines such as ranking or tournaments, or of replacing the roulette wheel by stochastic universal sampling, should be justified by the structure of the problem. For instance, in the flow shop scheduling problem, the difference between the solutions with the best and worst makespan values is often relatively small. Taking these peculiarities into account, increasing the selection pressure by ranking the solutions exponentially or linearly can be considered advisable under these circumstances. In any case, the selection process related to the practical implementation is depicted below:

Step 3.1) Calculate the inverse of the fitness values and the total

illustration not visible in this excerpt

Step 3.2) Estimate the probability for each chromosome

illustration not visible in this excerpt

Step 3.3) Compute the cumulative probabilities, starting with C[1] = 0.1254

illustration not visible in this excerpt

Having calculated the cumulative probabilities, the roulette wheel procedure can begin. As denoted by the vector R below, this algorithmic routine starts with the generation of six random numbers ranging from zero to one:

R[1] = 0.201, R[2] = 0.284, R[3] = 0.099, R[4] = 0.822, R[5] = 0.398, R[6] = 0.501

By placing the random sequence R and the cumulative probabilities P below each other, it is easy to visually match the corresponding vectors and perceive that roulette selection provides the following results: 1) all the members of the population should be included in the next lineage except for chromosome five; 2) given the random vector R, the third individual should be included twice in the mating pool. Apart from this intuitive approach, Hermawanto, in the article 'Genetic algorithm for solving a simple mathematical equality problem' [47], describes how to perform the roulette wheel procedure in a systematic way, providing an example that demonstrates how to select a chromosome according to the information contained in the random number R[1].

“If random number R [1] is greater than P [1] and smaller than P [2] then select Chromosome [2] as a chromosome in the new population for next generation.” [47, p. 6]
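A compact Matlab sketch of this rule may be helpful; the population and fitness values below are placeholders rather than the figures of the running example:

% Roulette wheel sketch following the quoted rule: chromosome k is chosen
% whenever C(k-1) < R <= C(k). Population and fitness values are placeholders.
pop = randi([0 30], 6, 4);                 % placeholder gene pool
fit = 1 + sum(pop, 2);                     % placeholder fitness (to be minimized)
p   = (1 ./ fit) / sum(1 ./ fit);          % steps 3.1-3.2: inverse-fitness shares
C   = cumsum(p);                           % step 3.3: cumulative probabilities
newPop = zeros(size(pop));
for k = 1:size(pop, 1)
    idx = find(rand() <= C, 1, 'first');   % spin the wheel once per slot
    newPop(k, :) = pop(idx, :);
end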

Table 3.7.1: Roulette wheel selection

illustration not visible in this excerpt

illustration not visible in this excerpt

Step 4) Crossover

After determining the individuals that constitute the mating pool, the next phase of the GA encompasses the establishment of a new lineage of offspring by means of recombination of parental chromosomes. Although several different types of genetic operators can be used, it is important for the sake of convergence that the chosen method possesses the Lamarckian property, meaning that recombination should be done in a way that ensures the offspring can inherit the best parental traits. As regards this example, one-cut point crossover is sufficient to enforce the desired property. In practice, this procedure chooses a random position in the parents' chromosomes and forms the offspring by exchanging the information on either side of this cut point.

illustration not visible in this excerpt

Figure 3.7.1: One-cut point crossover

Step 4.1) Preliminary steps

Before crossover takes place, it is advisable for the sake of convenience to assign the pairs that will generate the offspring and determine the points in which crossover will occur. These parameters are gathered here according to the following instructions:

1. Using the NewChromosome gene pool as a basis, generate two random sequences containing permutations of the integers one to three and four to six, respectively.
2. The aforementioned procedure should yield three couples of parents. Seeing that it is possible to generate two children using one point crossover, it is clear that the population size will remain constant.
3. To determine the cut points, it is necessary to generate another random sequence consisting of three integers ranging from one to three. Contrary to the array used to assign couples, the vector containing crossover points can contain repetitions. In any case, information about the couples that will be formed and their respective cutting points is as follows:

Table 3.7.2: Determining cutting points

illustration not visible in this excerpt

Step 4.2) Creating new offspring

With the required parameters in hand, recombination proceeds as shown. The cutting points are denoted in the gray shaded area.

illustration not visible in this excerpt

Figure 3.7.2: Creating new offspring

Step 5) Mutation

As shown by the previous step, crossover operates by combining the parents' genetic material in order to create new offspring. However, the act of creating new beings does not always occur in a deterministic manner but is subject to occasional errors called mutations.

Although the possible impacts of this phenomenon are difficult to evaluate, the usefulness or harm of the changes promoted by mutation can be more easily perceived in microscopic organisms. One obvious example in this realm is the HIV virus. Consisting of only fifteen proteins and RNA, this simple structure replicates through a process that is subject to a series of copying errors that tend to occur within relatively little time. In some cases, these errors can be advantageous to the retrovirus because they can make a particular strain resistant to certain types of substances normally used to inhibit the organism's replication. In other cases, mutation can be beneficial to the host, as seen in the loss of fitness of the HIV strains resistant to a substance called lamivudine.

Aside from the potential benefits or disadvantages, the process of mutation in this example requires three additional parameters: 1) the mutation rate, 2) the loci at which the mutation operator will take place, and 3) the values of the new genes. Here, calculating the number of mutated genes is far from complicated. Setting the mutation rate to 10% and verifying that the total number of genes present in the population is twenty-four, it follows that two genes from the pool will be altered: all that is necessary is to multiply the two figures (0.10 x 24 = 2.4) and round the result down to the nearest integer. To define the loci at which mutation will take place, it is necessary to generate two random numbers between one and the total quantity of genes. Having assigned the adequate locations, which in this case are positions three and nineteen, the next step is to create two random integers ranging from zero to thirty and replace the corresponding genes with these new values. This process is shown in the figure below:

illustration not visible in this excerpt

Figure 3.7.3: Mutation
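The same step can be written compactly in Matlab. The gene pool below is a placeholder; only the 10% rate and the replacement range 0..30 are taken from the text:

% Mutation sketch for the step above: floor(rate * total genes) loci are
% redrawn uniformly from 0..30. The population is a placeholder.
pop  = randi([0 30], 6, 4);              % placeholder pool of 24 genes
rate = 0.10;
nMut = floor(rate * numel(pop));         % 0.10 * 24 = 2.4 -> 2 genes mutate
loci = randperm(numel(pop), nMut);       % distinct random loci (linear indexing)
pop(loci) = randi([0 30], 1, nMut);      % replace them with new random values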

Step 6) Partial result

The last step of this practical implementation involves updating the fitness values and verifying whether the result is converging to the desired minimum. Substituting the data from the last table into the objective function, the following results are obtained:

illustration not visible in this excerpt

As can be easily verified, the current best individual refers to the chromosome that encodes the solution [10,5,3,1] with the fitness value of six. Since the best previous fitness value for this minimization problem was ninety-two, it is possible to verify at this early stage that the algorithm shows signs of convergence.

3.8. NEH heuristic

Developed by Nawaz, Enscore and Ham in 1983 [48], the eponymous NEH algorithm is probably the most relevant deterministic heuristic used to solve permutation flow shop problems.

Basically, the method consists of two stages. The first involves sorting the jobs according to the sum of their processing times across the machines. The second encompasses the construction of partial schedules by consecutively placing jobs in different positions so that the overall makespan is minimized. Even though this heuristic is known to perform well, it possesses some weaknesses related to tie management and the computation of redundant sequences. Due to these minor but significant flaws, several improvements have been proposed, the most important being the NEH-T version created by Taillard, which added dynamic programming to the original algorithm to reduce the running time from O(n^3 x m) to O(n^2 x m). Despite the significant reduction in complexity, Taillard's heuristic [49] is not as precise as its predecessor: while the original version of the algorithm is capable of providing results close to 3% of the expected makespan, empirical results show that Taillard's version is on average 10% away from optimality. Jena, Poggi de Aragão and Sotelo da Silva reaffirm the superior performance of the heuristic developed by Nawaz, Enscore and Ham in the following passage:

“Due to its exceptional results in fairly short running time, the NEH is up-to-date the most discussed and analyzed PFSP heuristics. Comprehensive computational experiments given by Ruiz and Maroto (2005) argue that, considering its low computation costs, the NEH can still be considered the best heuristic among all deterministic ones.” [50, p. 3]

Seeing that the NEH remains the best-performing deterministic heuristic for the permutation flow shop problem, the next section will demonstrate how to implement this approach in practice.
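Before the worked example, the two stages described above can be condensed into the following Matlab sketch. The processing-time matrix is a placeholder, not the data of Table 3.8.1, and ties are broken by keeping the first best insertion found:

function seq = neh_demo()
% NEH sketch following the two stages described above. The matrix p
% (jobs x machines) is a placeholder, not the data of Table 3.8.1.
    p = [5 9 8 10 1; 9 3 10 1 8; 9 4 5 8 6; 4 8 8 7 2];
    [~, order] = sort(sum(p, 2), 'descend');  % stage 1: sort by total time
    order = order(:)';                        % row vector of job indices
    seq = order(1);
    for k = 2:numel(order)                    % stage 2: best-insertion loop
        best = inf;
        for pos = 1:numel(seq) + 1            % try every insertion position
            cand = [seq(1:pos-1), order(k), seq(pos:end)];
            c = makespan(p(cand, :));
            if c < best, best = c; bestSeq = cand; end   % ties: keep first
        end
        seq = bestSeq;
    end
end

function c = makespan(p)
% Completion-time recursion: C(i,j) = max(C(i-1,j), C(i,j-1)) + p(i,j).
    [n, m] = size(p);
    C = zeros(n, m);
    for i = 1:n
        for j = 1:m
            up = 0;   if i > 1, up = C(i-1, j);   end
            left = 0; if j > 1, left = C(i, j-1); end
            C(i, j) = max(up, left) + p(i, j);
        end
    end
    c = C(n, m);
end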

3.8.1. NEH algorithm: practical implementation

Although the NEH algorithm can also be used to optimize due-date-related objectives such as tardiness, this example deals strictly with makespan minimization. Taking this goal into account, the table below displays the number of jobs, the machines and the processing times [48]:

Table 3.8.1: NEH algorithm problem data

illustration not visible in this excerpt

As can be easily verified, the manufacturing environment depicted in this practical implementation encompasses four jobs and five machines. From a computational point of view, this instance requires the evaluation of only nine partial sequences, a figure that represents an obvious advantage over traditional exact methods. Nonetheless, the first step toward obtaining these partial sequences is to sum the processing times related to jobs (j1, j2, j3, j4), denoted (t1, t2, t3, t4) herein. The execution of this particular step of the NEH algorithm yields the following results:

illustration not visible in this excerpt

Having concluded the summation, the next stage involves sorting the results that were just obtained in descending order:

illustration not visible in this excerpt

After completing the sorting procedure, it is necessary to select the two jobs with the highest sums of processing times and calculate the makespan of the corresponding partial sequences. Since the jobs with the highest total processing times are j1 and j3, all that is required is to evaluate both orderings, obtaining the partial sequences (j1,j3) and (j3,j1).

- Makespan for partial sequence (j1,j3): 460
- Makespan for partial sequence (j3,j1): 420

According to the calculations made above, it is easy to perceive that the minimum makespan corresponds to the partial sequence (j3,j1). To avoid complete enumeration, the NEH algorithm imposes that this sequence is kept fixed, implying that job j2 must be inserted either 1) before j3, 2) right after j3, or 3) after j1. These possibilities are depicted in the diagram below:

illustration not visible in this excerpt

Figure 3.8.1: Insertion of job two

Having defined how job j2 can be placed, the NEH heuristic requires once more the calculations of the makespan values related to each feasible partial sequence:

- Makespan for partial sequence (j2,j3,j1): 510
- Makespan for partial sequence (j3,j2,j1): 510
- Makespan for partial sequence (j3,j1,j2): 500

Seeing that j4 has not been scheduled, the next step involves repeating the process of inserting the remaining job in the sequence containing the minimum makespan. Here, j4 can assume the following positions:

illustration not visible in this excerpt

Figure 3.8.2: Insertion of job four

After performing the due makespan calculations, the following results are obtained:

- Makespan for partial sequence (j4,j3,j1,j2): 540
- Makespan for partial sequence (j3,j1,j4,j2): 580
- Makespan for partial sequence (j3,j4,j1,j2): 570
- Makespan for partial sequence (j3,j1,j2,j4): 580

illustration not visible in this excerpt

Figure 3.8.3: NEH final result

As denoted by the last Gantt diagram, the minimum makespan obtained by the NEH algorithm is 540 time units. Due to the heuristic nature of the method, this value does not necessarily correspond to the exact solution but is expected to lie within roughly 3% of the global optimum. Although this level of precision at this complexity can be considered impressive, it is important to recall that the utility of the method is limited to permutation schedules. Unlike metaheuristics, heuristic methods such as NEH cannot be applied to a wide range of problems, a factor that highlights the need to study optimization frameworks that allow greater generalization.

4. PROPOSED SOLUTIONS

The goal of this dissertation is to develop methods capable of mitigating the computational burden associated with the permutation flow shop scheduling problem. Taking this objective into account, three different candidate methods were developed:

- A genetic algorithm, from now on denominated AGA.
- Two hybrid genetic algorithms, from now on called AHGA1 and AHGA2.

Table 4.1: Proposed solutions

illustration not visible in this excerpt

Having described the overall mechanisms of the proposed algorithms, the next step involves explaining the exact manner in which they were implemented. Here, the main idea is to highlight the adjustments made for the sake of performance improvement. In this realm, two basic categories should be mentioned: the first encompasses adapting the genetic operators, the problem representation and the objective function to the characteristics of flow shop scheduling; the second assumes the form of local search.

In view of the need to match the structure of the problem at hand to the proposed solution methods, the discussion starts with the list provided below, which summarizes the abovementioned strategies, their underlying reasons and some brief technical issues.

Adjustment strategies related to genetic operators, problem representation and objective function:

- Seeing that the number of jobs present in the instance is seven and that the complexity of the problem is O(n!), the number of feasible schedules encompasses 7! (5040) sequences.
- Given the data above, each feasible schedule consists of a unique permutation of seven digits, such as [1,2,3,4,5,6,7] or [7,6,5,4,3,2,1].
- Considering the characteristics of the feasible solutions, the proposed algorithms encoded the chromosomes as explicit job permutations. In practice, this task was accomplished using the Matlab function randperm (see the sketch after this list).
- To explain the advantages of representing the problem using explicit permutations, it must be acknowledged that the set of feasible candidates is substantially smaller than the one that also includes infeasible solutions, which would demand substantial effort to eliminate or penalize. Taking this computational burden into account, it is clear that knowledge about the structure of feasible solutions must be incorporated into the proposed AGA and AHGAs. For a more detailed explanation of this subject, please refer to the next section.
- In addition to this, the feasibility of the chromosomes was maintained at all times in the proposed algorithms by using the order one crossover and pairwise exchange mutation operators.
- Generally speaking, flow shop scheduling problems tend to be characterized by the presence of multiple equivalent solutions. Moreover, the difference between the optimal makespan and a mediocre candidate is often small, implying that mechanisms used to increase the reproductive odds of superior parents must be addressed. Due to these factors, the GA used linear ranking selection as a means to increase the inclusion of fitter genes into the mating pool.
- The constraints related to the machines were enforced by adopting discrete event simulation. In addition, this technique also served as a way to calculate the value of the objective function.
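As anticipated in the list above, a minimal Matlab sketch of the permutation encoding is shown below; the population size of twenty is an assumption, while the use of randperm follows the text:

% Permutation encoding of the proposed algorithms (sketch): every chromosome
% is a unique permutation of the seven jobs.
popSize = 20;  nJobs = 7;                % population size is an assumption
pop = zeros(popSize, nJobs);
for k = 1:popSize
    pop(k, :) = randperm(nJobs);         % feasible schedule by construction
end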

Adjustment strategies related to local search:

- Besides the abovementioned strategies, it is also possible to incorporate problem- specific knowledge into a GA by including local search. In the proposed solutions, this approach was applied only to the best chromosome for the sake of maintaining the diversity of the population.
- Although the actual benefit of adopting this particular procedure can only be evaluated adequately after the execution of the GA, the presence of multiple equivalent solutions suggests that it is useful to augment the efficiency of the search for an improved schedule by introducing different types of local search [51] [52]. The techniques used to search the vicinity of the best solution provided by the GA are depicted below.

1) Insertion neighborhood

illustration not visible in this excerpt

Figure 4.1: Insertion neighborhood

2) Interchange moves

illustration not visible in this excerpt

Figure 4.2: Interchange move
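The two moves can be sketched in Matlab as follows. The sequences are assumed to be row vectors with in-range position arguments; both functions are illustrative helpers, not the thesis listings:

% Neighborhood sketches for the local search above (illustrative helpers).
function s = insertion_move(s, i, j)
% Remove the job at position i and reinsert it before the j-th slot.
    job = s(i);
    s(i) = [];                           % delete the job from the sequence
    s = [s(1:j-1), job, s(j:end)];       % reinsert it at the new position
end

function s = interchange_move(s, i, j)
% Swap the jobs at positions i and j (pairwise exchange of two slots).
    s([i j]) = s([j i]);
end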

Technical issues

1. To avoid errors concerning memory allocation in Matlab due to arrays and matrices that have no pre-defined size, the number of jobs present in the instance was reduced to seven.
2. The number of machines was reduced to two in order to: a) verify the results in polynomial time, b) ensure that the algorithm is converging to the global optimum, and c) guarantee that there are absolutely no mistakes related to makespan evaluation.
3. Even though it would be pertinent for the sake of performance to add the NEH algorithm to initialize the proposed solutions, this approach was avoided to facilitate convergence analysis.

4.1. Flow shop scheduling: limits of integer linear programming formulations

Typically, flow shop scheduling is formulated using integer linear programming, sets of recursive equations, or by simulating a determined manufacturing environment. In spite of the diversity of possibilities, the last approach was selected to represent the proposed GA and AHGAs due to its ease of implementation and favorable properties. As previously mentioned, using discrete event simulation means that it is not necessary to increase the number of integer variables according to the complexity of the manufacturing environment, as would occur with integer linear programming. On the contrary, adopting the representation approach proposed here only increases the size of the problem with respect to the number of jobs.

Moreover, integer linear programming searches the area bounded by the relaxation, implying that infeasible candidates will likely be evaluated. In contrast, genetic algorithms do not need to investigate any infeasible candidate as long as certain conditions contemplated in the proposed solutions hold: 1) the genetic operators are properly adjusted to the characteristics of the feasible solutions, 2) the constraints related to the machines are enforced, and 3) the solution vector is explicitly denoted in terms of unique permutations. Tables 4.2.1 and 4.2.2 use a three-job example to evidence the amount of frivolous calculation required to search a space wider than the domain of feasible job permutations.

Table 4.2.1: Feasible sequences

illustration not visible in this excerpt

Table 4.2.2: Infeasible/illegal sequences

illustration not visible in this excerpt

Regarding the requirement of adequate genetic operators, Tables 4.2.3 and 4.2.4 exemplify the difficulties of obtaining feasible offspring with inadequate recombination procedures, and Table 4.2.5 denotes the ideal Lamarckian properties that result from combining parent solutions consisting of unique permutations with an efficient crossover (order one) operator:

Table 4.2.3: One point crossover with illegal parents

illustration not visible in this excerpt

Table 4.2.4: One point crossover with feasible parents

illustration not visible in this excerpt

Table 4.2.5: Order one crossover with feasible parents

illustration not visible in this excerpt

Reinforcing the idea that integer linear programming searches a space wider than the feasible permutations of jobs, a sample formulation of the flow shop scheduling problem proposed by Sawik [53] is included below. Before this brief analysis takes place, it is important to note that the author uses a slightly different terminology: machines are called stages and jobs are referred to as parts. Besides this issue of terms, this expanded example presents every constraint individually instead of in compact form for the sake of clarity.

Indexes:

illustration not visible in this excerpt

Input parameters

illustration not visible in this excerpt

Decision variables:

illustration not visible in this excerpt

Sample problem:

illustration not visible in this excerpt

Objective function

illustration not visible in this excerpt

Constraints:

1. Part completion constraints: each part must be processed on the first machine and successively on all downstream machines

illustration not visible in this excerpt

2. Part noninterference constraints: no two parts can be processed simultaneously on the same machine

illustration not visible in this excerpt

3. Maximum completion time constraints: the schedule length is determined by the latest completion time of some part in the last machine

illustration not visible in this excerpt

4.2. Flow shop scheduling: crossover and mutation operators

Having previously discussed the strategies of knowledge incorporation adopted in the proposed solutions, the focus of this section shifts to the following aspects: 1) demonstrating how the genetic operators used in the proposed AGA and AHGAs were implemented in practice, and 2) proving that the aforementioned procedures cannot generate infeasible offspring provided that the input data consists of unique permutations.

4.2.1. Order one crossover

Even though there are other equally effective methods in terms of generating feasible offspring, order one crossover [54] was chosen due to its good performance and simplicity of implementation. An overview of the algorithm and the corresponding Matlab code can be seen below:

Order one crossover procedure

Notation

- P1 and P2 = vectors representing the genetic information of parents one and two
- r1 and r2 = cut points delimiting the portion of the genes from parent P1 that will be inherited by the offspring
- C = number of genes present in P1 and P2
- O1 and O2 = offspring one and two

1. Choose two random integers r1 and r2 such that:

illustration not visible in this excerpt

2. With r1 and r2 in hand, use this information to select the corresponding portion of the parent one chromosome and highlight it.

r1=6, r2=2

illustration not visible in this excerpt

Figure 4.3.1: Crossover step one and two

3. Compare the highlighted genes from parent one with parent two and eliminate the redundant information from the latter individual.

illustration not visible in this excerpt

Figure 4.3.2: Crossover step three

4. Generate O1 by directly copying the genes that encompass the segment r1 to r2 in P1 as follows:

illustration not visible in this excerpt

Figure 4.3.3: Crossover step four

5. Insert the remaining genes from P2 into the empty slots of O1. If P1 and P2 consist of unique permutations, O1 must be feasible.

illustration not visible in this excerpt

Figure 4.3.4: Crossover step five

6. Exchange the roles of the vectors that encode P1 and P2 and repeat steps one to five to obtain a second child.

illustration not visible in this excerpt

Figure 4.3.5: Crossover step six

Code 4.1: Order one crossover - Matlab implementation

illustration not visible in this excerpt
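Since the original listing is not reproduced in this excerpt, the following minimal sketch is one Matlab rendering consistent with steps one to six above (empty slots are filled from left to right; some order one variants fill them circularly from the second cut point):

function [O1, O2] = order_one_crossover(P1, P2)
% Minimal order one (OX1) sketch consistent with steps 1-6; not the author's
% original Code 4.1. P1 and P2 are assumed to be row-vector permutations.
    n = numel(P1);
    r = sort(randperm(n, 2));            % step 1: two distinct random cut points
    seg = r(1):r(2);                     % step 2: highlighted segment of parent one
    O1 = fill_child(P1, P2, seg, n);
    O2 = fill_child(P2, P1, seg, n);     % step 6: parents exchange roles
end

function O = fill_child(A, B, seg, n)
    O = zeros(1, n);
    O(seg) = A(seg);                     % step 4: copy the highlighted segment
    rest = B(~ismember(B, A(seg)));      % step 3: drop redundant genes from B
    O(setdiff(1:n, seg)) = rest;         % step 5: fill the empty slots in order
end

If P1 and P2 are unique permutations, every gene appears exactly once in each offspring, so feasibility is preserved by construction.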

Developing the idea that the genetic operators used here do not generate infeasible chromosomes, the next section demonstrates the practical implementation of the mutation operator.

4.2.2. Mutation

Similarly to crossover, the mutation operator chosen [41] for the proposed AGA and AHGAs will not create infeasible sequences provided that the input consists of unique permutations. The figure and code below illustrate exactly how this procedure works:

- Given the sequence of jobs, exchange two random positions to obtain a new candidate.

illustration not visible in this excerpt

Figure 4.4: Pairwise exchange mutation

Code 4.2: Mutation - Matlab implementation

illustration not visible in this excerpt
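As with the crossover listing, Code 4.2 is not reproduced in this excerpt; a minimal Matlab sketch consistent with the figure above is:

function s = pairwise_exchange(s)
% Pairwise exchange mutation sketch (not the original Code 4.2): swap two
% distinct random positions of a permutation chromosome s (row vector).
    idx = randperm(numel(s), 2);     % two distinct random loci
    s(idx) = s(fliplr(idx));         % exchange the two jobs
end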

Apart from crossover and mutation, selection also plays an important role in the domain of matching the structure of a problem at hand to the GA. The next section deals exactly with this topic.

4.2.3. Selection

Selection [41] involves determining the individuals that are going to produce offspring. In many cases, designating the candidate solutions that will form the mating pool can be achieved simply by using raw fitness values as a criterion, as shown in the previous example. However, this type of approach can lead to undesired events such as premature convergence or stagnation. To circumvent the negative consequences of choosing a pool of chromosomes strictly on the basis of their fitness values, fitness proportional selection is typically replaced by ranking selection.

Despite its improved robustness, ranking selection can be further refined by mechanisms that allow the level of selective pressure to be adjusted, a strategy that can be used to increase the reproductive chances of the most suitable solutions. The graphic below displays the effects of applying different levels of selective pressure using linear ranking:

Recalling what was previously affirmed, machine scheduling problems often admit multiple equivalent solutions. Acknowledging these difficulties, the ranking method incorporated into the AGA and AHGAs addresses the issue of ties. The complete details of this procedure can be verified below:

Adapted ranking selection

Parameters

- SP = selective pressure
- Nind = number of individuals
- Pos = position assigned to the individual (least fit = 1)

1. Evaluate the makespan of every individual and rank the solutions so that the fittest individuals are assigned the highest scores. In addition, assign the same rank to sequences that have the same makespan.

Table 4.3.1: Linear ranking step one

illustration not visible in this excerpt

(*) The fittest individuals are the ones with the lowest makespan

2. Apply the linear transformation using formula (4.8) with a selective pressure value of 1.1:

illustration not visible in this excerpt
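Formula (4.8) itself belongs to the omitted illustration; the standard linear ranking transformation consistent with the parameters listed above is

$$\mathrm{Fitness}(Pos) \;=\; 2 - SP + 2\,(SP - 1)\,\frac{Pos - 1}{N_{ind} - 1}, \qquad SP \in [1.0,\ 2.0],$$

so that with SP = 1.1 the scaled fitness values range linearly from 0.9 for the least fit individual to 1.1 for the fittest.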

Table 4.3.2: Linear ranking step two

illustration not visible in this excerpt

3. Calculate the cumulative probabilities and perform roulette selection using the standard procedure.

4.2.4. No free lunch theorems for optimization and proposed solutions

Returning to the essential idea of this section, the main concern in the design of the proposed GAs was to improve their performance by incorporating problem-specific knowledge. In view of all the efforts made so far, it is worth affirming that this design approach is theoretically grounded in circumventing the negative repercussions of the NFL theorems. Explained simply, these postulates affirm that any search algorithm fares no better than random search when its performance is averaged across the set of all problems. Since that perspective is not particularly encouraging, the only way to improve the performance of an optimization algorithm is to adapt it to the structure of the problem, an effort that can only be achieved by incorporating knowledge. James Montgomery explains the subject in more detail as follows:

“The fundamental implication of the first two NFL theorems is that if an algorithm performs well on one class of problems then it must do poorly on other problems. Moreover, an algorithm that performs well on one class of problems must perform worse than random search on all remaining problems. Two immediate conclusions can be drawn from this. First, running an algorithm on a small number of problems with a small range of parameter settings may not be a good predictor of that algorithm’s performance on other problems, especially problems of a different type. Second, the result of Theorem 1 holds as long as knowledge of a problem’s structure is not incorporated into a search algorithm. In other words, incorporating problem-specific information into an algorithm will improve its match to that problem and hence, its performance.” [55, p. 3]

In addition to the remarks made by Montgomery about the first NFL theorem, Krasnogor, Aragón and Pacheco explicitly mention the ways in which generic evolutionary strategies can be adjusted to the characteristics of a determined instance. Moreover, the authors also stress the fact that the exploitation of problem-specific knowledge is intrinsic to hybrid genetic algorithms:

“However, the reader must note that it is well established that generally good black-box optimisers that excel at solving any problem do not exist. This is why successful EAs are usually found in “hybridized form”. Thus, a “conservation of competence” principle applies: the better one algorithm is solving one specific instance (class) the worst it is solving a different instance (class) (Wolpert and Macready 1997). That is, it cannot be expected that a black-box metaheuristic will suit all problem classes and instances all the time, because, it is theoretically impossible to have both ready made of the-shelf general and good solvers. Memetic algorithms are popular because they are good a algorithmic template that aid in the balancing act needed to solve challenging problems by successfully re-using a general, of-the-shelf, solver (e.g. EAs), which can readily incorporate domain specific features in the form of local searchers. Virtually everybody agrees that for an evolutionary algorithm to succeed in a real-world scenario it must contain sufficient domain knowledge, which is built-in not only in the representation, fitness function and genetic operators used but also in the local search(ers) employed” [56, p. 3]

Understanding that the proposed solutions rest on a solid theoretical foundation, the next part of this work proceeds with the analysis of the results. Here, the goal is to verify whether the particular choices made in the design of the proposed solutions are able to empirically mitigate the computational burden associated with the flow shop scheduling problem.

5. ANALYSIS OF RESULTS AND FUTURE RESEARCH DIRECTIONS

As the title implies, the main goal of this chapter is to present and analyze the performance of the proposed solutions. To give an overview, the evidence gathered in this dissertation shows that the hybrid versions were especially suitable for reducing the computational burden associated with the flow shop scheduling problem. As discussed before, and in line with the theory on the matter, this advantage can be credited to the addition of local search, a procedure that intrinsically operates in the direction of incorporating problem-specific knowledge. Apart from reporting and interpreting the results, this chapter also discusses improvements to the existing framework. Having briefly introduced the structure of this chapter, the following section proceeds with the quantitative analysis of the proposed solutions.

5.1. Quantitative analysis

Due to the stochastic components present in evolutionary algorithms, evaluating their performance fairly can be a challenging task. Taking GAs as a reference, the optimal result may be reached almost immediately or only after many generations. Acknowledging these difficulties, the initial methodology was to create three distinct samples, one for each proposed solution, consisting of thirty trials each. Besides sampling issues, an important aspect of performance assessment is the criterion used to measure the efficiency of an algorithm. In view of the diversity of hardware, coding techniques and computer languages, the success of the proposed solution methods was estimated by measuring the number of evaluations required to find the exact solution, instead of elapsed time.

Surpassing the initial expectations, the proposed algorithms converged to the exact global optimum in all trials. However, the results demonstrated that the hybrid versions outperformed the proposed AGA by a substantial margin. The table below compares these figures:

Table 5.1: Summarized results

illustration not visible in this excerpt

Even though the results obtained can be described as encouraging, the descriptive information displayed above is insufficient for making accurate inferences about the efficiency of the proposed algorithms. Given the seriousness of this limitation, a nonparametric estimator called Kaplan-Meier [57] was used to analyze the study data with due rigor. To familiarize the reader with the subject, the method is widely employed in an area of statistics called survival analysis [58] [59] [60].

One important characteristic of the Kaplan-Meier estimator is that the sampling admits censoring. Usually, this circumstance occurs either when an individual abandons a study or when the event of interest has already happened before enrollment. In addition to censoring issues, the efficacy of Kaplan-Meier estimators requires that certain criteria are met, namely [59]: 1) independence of the sample, 2) independence of censoring, 3) uniformity in the time interval used to verify the status of the individuals, and 4) a reasonable number of censored values.

Even though these assumptions pose obvious challenges in many empirical studies, they are unlikely to be violated when evaluating the proposed algorithms, because computer simulations establish a controlled environment. Here, the aspects that complicate the application of the Kaplan-Meier method are either unimportant or completely manageable. To evidence this, assessing the performance of the proposed algorithms does not involve circumstances such as subjects abandoning the study, implicit factors that may hinder the independence of the sample, inconsistent data collection intervals or practical constraints on the duration of the study. On the contrary, the sample gathered herein is characterized by: 1) uncensored data, 2) trials consisting of identical instances, 3) rigid data collection periodicity, and 4) the absence of limits on following up the individuals until the event of interest occurs.
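
For reference, a minimal Julia sketch of the product-limit form of the estimator, specialized to the uncensored case just described, is shown next. Here each entry of times would hold the generation at which one trial converged; the names and the toy data are illustrative, not the statistical software used in the study.

# Kaplan-Meier (product-limit) estimator for uncensored convergence data:
# S(t) estimates the probability that a trial is still searching after t.
function kaplan_meier(times::Vector{Int})
    t = sort(unique(times))
    at_risk = length(times)
    S, s = Float64[], 1.0
    for ti in t
        d = count(==(ti), times)   # trials converging exactly at ti
        s *= 1 - d / at_risk       # product-limit update
        push!(S, s)
        at_risk -= d               # these trials leave the risk set
    end
    return t, S
end

t, S = kaplan_meier([3, 3, 5, 8, 8, 8, 12])   # toy data, in generations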

Given that there are no plausible reasons to believe that these assumptions will not hold, it is relevant to mention that the Kaplan-Meier estimator is particularly suitable for the analysis performed here, since it does not require assuming a particular distribution. This characteristic matters because the histograms created from the convergence data do not assume any standard form. Having demonstrated the pertinence of the method, the next step involves the actual analysis of the data. Pursuant to this goal, the first information to be analyzed is a detailed description of the sampling methodology, followed by a brief performance chart. As the table below denotes, the initial comparison consists of two uncensored samples:

Table 5.2: Sample description

illustration not visible in this excerpt

Table 5.3: Mean and median survival times

illustration not visible in this excerpt

Beyond the clear performance advantage of the hybrid version (AHGA1), Table 5.3 shows that there is no overlap between the confidence intervals, which indicates that the differences observed may be statistically significant. Providing further evidence that this may be the case, the survival function curves appear to be separated except at the very early moments. Complementing the visual inspection, it is important to mention that the time units displayed in the chart refer to generations, grouped in multiples of fifty iterations.

Confirming the thesis that the difference observed between the performance of the AGA and AHGA1 is indeed significant, the statistical tests performed reject the null hypothesis that the survival curves are identical at any conventional level. These results can be verified below:

Table 5.4: Statistical significance

illustration not visible in this excerpt

To conclude the first part of the analysis, it should be clear by now that the AHGA1 algorithm is significantly superior to the proposed AGA, in both practical and statistical terms. Having discussed these results, the second phase of the analysis involves comparing AHGA1 and AHGA2. Recalling what was previously stated, the distinction between these hybrid optimization strategies refers to the timing and the criteria used to terminate the local search procedure. In AHGA1, this event occurs after a good solution is constructed by the AGA, whereas AHGA2 alternates both procedures using the first-improvement criterion.

As the tables below denote, the design of the analysis is exactly the same as in the previous case. However, the conclusions are quite the opposite, given that the differences between the survival curves are far from statistical significance. As a result of the failure to reject the null hypothesis, it can be stated with confidence that timing does not play any role in improving the convergence properties of the algorithms under analysis, at least for the instance that was observed. Moreover, the overlapping curves and confidence intervals denote the consistency of the estimates.

Table 5.5: Sample characteristics

illustration not visible in this excerpt

Table 5.6: Mean and median survival times

illustration not visible in this excerpt

Table 5.7: Pairwise comparisons

illustration not visible in this excerpt

illustration not visible in this excerpt

Figure 5.2: Kaplan-Meier survival curves - AHGA1 versus AHGA2 (Treatment groups two and three)

To make an additional remark about the Kaplan-Meier estimator, it is worth mentioning that the aforementioned statistical significance tests have very similar null hypotheses, except that they tend to emphasize different moments of the survival curve. For instance, while the Mantel-Cox test places greater importance on events that happen later in time, the Wilcoxon test tends to value occurrences that appear earlier, and Tarone-Ware takes an intermediate position.
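
These three tests can be viewed as a single weighted statistic with different weight functions of the number at risk n: w(n) = 1 for Mantel-Cox, w(n) = n for Wilcoxon, and w(n) = sqrt(n) for Tarone-Ware. The Julia sketch below illustrates this family for two uncensored samples; it is a simplified illustration under those assumptions, not the statistical package used in the study.

# Weighted log-rank statistic for two uncensored samples x and y;
# under the null hypothesis of identical survival curves the result
# is approximately chi-square distributed with one degree of freedom.
function weighted_logrank(x::Vector{Int}, y::Vector{Int}, w::Function)
    U, V = 0.0, 0.0                                   # score and its variance
    for t in sort(unique(vcat(x, y)))
        n1, n2 = count(>=(t), x), count(>=(t), y)     # at risk per group
        d1, d2 = count(==(t), x), count(==(t), y)     # events at time t
        n, d = n1 + n2, d1 + d2
        n > 1 || continue
        U += w(n) * (d1 - d * n1 / n)                 # observed minus expected
        V += w(n)^2 * d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    end
    return U^2 / V
end

chi2 = weighted_logrank([3, 5, 8], [9, 12, 14], n -> sqrt(n))   # Tarone-Ware weights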

Finally, it is important to state that the same set of statistics performed above was also applied to the AGA and AHGA2 samples. As implied, the results obtained were equivalent. This means the null hypothesis was rejected for all tests at any conventional level.

5.2. Qualitative analysis

As the quantitative analysis demonstrated, the proposed algorithms were able to provide solutions equal to the optimal one in every single trial performed. This finding should not be considered particularly surprising, due to the elitism of the methods under analysis, the modest size of the instance and the efforts made to match the characteristics of the proposed algorithms to the problem. Moreover, the data also showed that the hybrid versions (AHGA1 and AHGA2) were able to lessen the burden of exhaustive search by around 94%. In spite of these encouraging results, it is important to note that the last conclusion cannot necessarily be extrapolated to other instances, due to insufficient quantitative evidence.

On the other hand, it is important to state that the attributes of the instance used herein were deliberately chosen to make the optimization process as difficult as the benchmark cases available in the literature. The only substantial alteration refers to the reduced size of the instance. Recalling what was previously affirmed, problems that contain multiple equivalent solutions tend to be hard to solve, and these are exactly the characteristics of the particular problem used to generate the data. In this direction, previous fitness calculations showed that, from an initial population consisting of twenty-five chromosomes, only ten could provide unique makespan values, not to mention that the relative difference between the best current candidate and the worst was only 15.30%.
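
For reference, the fitness function behind these makespan figures can be sketched in a few lines of Julia. The version below assumes processing times are stored as P[machine, job]; it is an illustration of the standard permutation flow shop recurrence, not the appendix code.

# Makespan of a permutation flow shop schedule: C[i] tracks the
# completion time of the last scheduled job on machine i.
function makespan(P::Matrix{Int}, seq::Vector{Int})
    C = zeros(Int, size(P, 1))
    for j in seq
        C[1] += P[1, j]
        for i in 2:size(P, 1)
            C[i] = max(C[i], C[i-1]) + P[i, j]   # wait for machine or predecessor
        end
    end
    return C[end]
end

# Counting distinct fitness values across a population of permutations:
# length(unique(makespan(P, s) for s in population))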

Another issue that deserves further explanation is the significant performance disparity observed between the proposed GA and its hybrid counterparts. As can be deduced from the sampling technique used, this discrepancy cannot be credited to specific characteristics of the instance, because the instances are identical. Taking this factor into account, the only distinctive feature related to improved performance is the inclusion of local search. Here, the data obtained so far indicate that knowledge can be grasped as a key explanatory factor in the refinement of the proposed solution methods. The following quote from Cotta and Moscato reinforces this conclusion:

“The most crucial and distinctive feature of MAs, the inclusion of problem knowledge mentioned above, is also supported by strong theoretical results. As Hart and Belew initially stated and Wolpert and Macready later popularized in the so-called no free lunch theorem, a search algorithm strictly performs in accordance with the amount and quality of the problem knowledge they incorporate. This fact clearly underpins the exploitation of problem knowledge intrinsic to MAs. Given that the term hybridization is commonly used to denote the process of incorporating problem knowledge, it is not surprising that MAs are sometimes called ‘hybrid evolutionary algorithms’ as well.” [61, p. 106]

Besides the use of local search, eliminating the evaluation of unfeasible solutions by explicitly encoding permutations played a significant part in improving the performance of the algorithms. Clearly, there is no justification for examining unfeasible solutions when the method can be adjusted so as to exclude them from the input of the problem, as the sketch below illustrates.
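
A minimal Julia sketch of this encoding choice: by generating chromosomes only as permutations, every candidate is a feasible schedule by construction. The sizes below are hypothetical.

using Random

n_jobs, pop_size = 10, 25
population = [randperm(n_jobs) for _ in 1:pop_size]   # every chromosome is feasible
# Permutation-preserving operators (e.g., order crossover and pairwise
# exchange mutation) then maintain feasibility throughout the search.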

Having emphasized some of the reasons that underpin the superiority of the hybrid genetic algorithm approach, the next part of this work will discuss possible ways to improve the results obtained so far.

5.3. Future research directions

As mentioned before, increasing the scale of the problem necessarily involves rewriting the code so that memory allocation issues do not occur. From a pragmatic standpoint, this goal is achieved by preallocating a series of dummy zero-filled matrices. Moreover, developing the code in Matlab may not be the best option, because of licensing costs and slower performance. Understanding the need to rewrite the proposed algorithms, three alternatives were considered: Fortran 90 [62], C++ [63] with the Eigen library, and Julia. Given its performance comparable to that of C++, expressive syntax, stability, integration with other languages, and tools that simplify parallel computing, Julia [64] was selected. The updated code can be consulted in the Appendix for reference and reproduction of the results.
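
The preallocation idea mentioned above can be sketched as follows in Julia; the dimensions are hypothetical and merely illustrate allocating the zero-filled buffers once, outside the generational loop, and overwriting them in place.

# Allocate zero-filled buffers once, at the largest required size,
# instead of growing arrays inside the generational loop.
max_jobs, n_machines, pop_size = 500, 20, 100
C       = zeros(Int, n_machines)                      # completion-time buffer
pop     = [zeros(Int, max_jobs) for _ in 1:pop_size]  # chromosome buffers
fitness = zeros(Int, pop_size)                        # overwritten every generation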

In addition, the evidence gathered so far suggests that the hybrid versions of the proposed algorithms managed to substantially reduce the onus associated with the factorial complexity of the flow shop scheduling problem, meaning that expanding the framework in the direction of intensifying local search seems to be a plausible method of improvement. One simple but effective way to fulfill the aforementioned goal is to add an algorithm called iterated greedy to the current GA framework. The following pseudocode explains exactly how this approach works:

illustration not visible in this excerpt

Figure 5.3: Iterated greedy algorithm for flow shop scheduling

As can be inferred from the data above, iterated greedy closely resembles ILS. The difference between these two approaches is that instead of applying a random perturbation, the iterated greedy algorithm operates by removing a certain number of components and reinserting them later with a greedy heuristic. In the method proposed by Ruiz and Stützle [65] to solve the flow shop scheduling problem, this process of destruction and construction is performed by removing a random set of jobs and reinserting them into the remaining sequence using the NEH heuristic. Even though the results are still preliminary, brief tests made using the AGA along with an iterated greedy algorithm indicated that this combination was able to achieve the upper bound for two of the 20-job, 5-machine problems from the Taillard benchmark in no more than 50 seconds. However, this result can be further improved by making this algorithm closer to the one proposed by Ruiz and Stützle [65] by: 1) using the more efficient NEH-T instead of NEH, 2) employing first improvement instead of best improvement, 3) addressing the issue of ties, and 4) changing the acceptance criterion from a deterministic one to one that resembles simulated annealing. Since the initial code has already been written and is functional, it will also be included in the Appendix. Finally, it is important to say that the hybridization proposed here is justified by the fact that the performance of the original iterated greedy algorithm degrades as the size of the instances increases. This factor suggests that including an approach with a proof of convergence, such as a GA, makes sense in the given circumstances.
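
A condensed Julia sketch of this destruction-construction loop is given below. It deliberately uses plain best-position insertion and the deterministic acceptance rule, i.e. two of the simplifications the list above proposes to replace; all names are illustrative and this is not the appendix code.

using Random

# Makespan of permutation `seq` given processing times P[machine, job].
function makespan(P::Matrix{Int}, seq::Vector{Int})
    C = zeros(Int, size(P, 1))
    for j in seq
        C[1] += P[1, j]
        for i in 2:size(P, 1)
            C[i] = max(C[i], C[i-1]) + P[i, j]
        end
    end
    return C[end]
end

# Insert `job` at the position of `partial` that minimizes the makespan.
function best_insertion(P, partial, job)
    best_pos, best_val = 1, typemax(Int)
    for pos in 1:length(partial) + 1
        v = makespan(P, insert!(copy(partial), pos, job))
        v < best_val && ((best_pos, best_val) = (pos, v))
    end
    return insert!(partial, best_pos, job)
end

# Iterated greedy: remove d random jobs, rebuild greedily, accept if better.
function iterated_greedy(P, seq, d, iters)
    best = copy(seq)
    for _ in 1:iters
        removed = shuffle(best)[1:d]                       # destruction
        rest = filter(j -> !(j in removed), best)
        for j in removed                                   # greedy construction
            rest = best_insertion(P, rest, j)
        end
        makespan(P, rest) < makespan(P, best) && (best = rest)   # deterministic acceptance
    end
    return best
end

P = rand(1:20, 5, 10)                        # toy instance: 5 machines, 10 jobs
sol = iterated_greedy(P, randperm(10), 3, 200)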

6. CONCLUSION

Given that flow shop scheduling is proven to be mathematically intractable in the vast majority of cases, the goal of this dissertation was to develop methods capable of mitigating the computational burden associated with the problem.

In this realm, three solutions were proposed. The first refers to a genetic algorithm that employed discrete event simulation and customized genetic operators as a means to eliminate the evaluation of unfeasible solutions and incorporate problem-specific knowledge. The second and third proposed solutions consist of hybrid methods that improved the aforementioned framework by including local search.

Computational experiments that used the Kaplan-Meier estimator to evaluate the performance of the algorithms demonstrated that the hybrid versions were able, in the worst case, to achieve exact results by investigating no more than six percent of the total number of feasible schedules. Granted that the evidence gathered so far suggests that the hybrid versions of the proposed algorithms managed to substantially reduce the onus associated with the factorial complexity of the flow shop scheduling problem, expanding the framework in the direction of intensifying local search seems to be a plausible method of improvement. In this realm, preliminary studies have demonstrated that a new method that adds the iterated greedy algorithm to the current GA framework is capable of tackling some of Taillard's benchmark instances in no more than 50 seconds.

KOKKUVÕTE

Production scheduling is in many cases a mathematically complex task. The goal of this master's thesis is to develop algorithms that help reduce or overcome the computational complexity of the problem under consideration.

Three different solutions were developed in this thesis. The first solution is based on adapting genetic algorithm operators to the specifics of the problem through discrete event simulation, with the aim of eliminating infeasible solutions and speeding up the algorithm. The second and third solutions consist of hybrid methods in which local search is added to the aforementioned method.

The proposed solutions were evaluated using numerical experiments and Kaplan-Meier statistics. As a result, it was found that the solutions based on hybrid methods converge, in the worst case, to the exact solution once about six percent of the feasible solutions have been examined.

Given that the hybrid versions of the proposed algorithms were able to noticeably reduce the burden associated with the factorial complexity of scheduling, expanding the methodology toward more intensive local search appears to be the right way forward. In this realm, preliminary studies have shown that a new method, which adds an iterated greedy algorithm to the existing genetic algorithm, is capable of solving some of Taillard's benchmark problems in less than 50 seconds.

7. APPENDIX

Having provided the due theoretical and practical evidence about the logic that underlies the proposed methods, this part of the work reveals the details of the Julia code. As shown below, the first set of instructions refers to the AGA and the others relate to the hybrid variants.

Code 7.1: Julia code for AGA using Taillard instances

illustration not visible in this excerpt

Code 7.2: Julia code for AHGA - local search with insertion neighborhood until best improvement

illustration not visible in this excerpt

Code 7.3: Julia code for AHGA - local search with insertion neighborhood until first improvement

illustration not visible in this excerpt

Code 7.4: Julia code for AHGA - local search with pairwise interchange until best improvement

illustration not visible in this excerpt

Code 7.5: Julia code for AHGA - local search with pairwise interchange until first improvement

illustration not visible in this excerpt

Code 7.6: Julia code for iterated greedy algorithm

illustration not visible in this excerpt

8. BIBLIOGRAPHY

[1] P. Brucker, Scheduling Algorithms, Berlin, Heidelberg, New York: Springer-Verlag Berlin Heidelberg GmbH, 2004.

[2] A. A. Juan, H. R. M. Lourenço, R. Luo and Q. Castella, "Using iterated local search for solving the flow-shop problem: Parallelization, parametrization, and randomization issues," International Transactions in Operational Research, vol. 21, no. 1, pp. 103-126, January 2014.

[3] S. S. Rao, Engineering Optimization: Theory and Practice, Hoboken: Wiley, 2009.

[4] L. A. Wolsey, Integer Programming, Hoboken: Wiley-Interscience, 1998.

[5] M. Conforti, G. Cornuejols and G. Zambelli, Integer Programming (Graduate Texts in Mathematics), New York: Springer, 2014.

[6] G. Sierksma, Linear and Integer Programming: Theory and Practice, Boca Raton: CRC, 1996.

[7] V. T. Paschos, Applications of Combinatorial Optimization, Hoboken: Wiley-ISTE, 2014.

[8] J. W. Chinneck, "http://www.sce.carleton.ca," [Online]. Available: http://www.sce.carleton.ca/faculty/chinneck/po.html. [Accessed 18 7 2015].

[9] D. Wolpert and W. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, April 1997.

[10] R. Graham, M. Grötschel and L. Lovász, Handbook of Combinatorics, Amsterdam: North Holland, 2010.

[11] L. Wolsey and G. Nemhauser, Integer and Combinatorial Optimization, Hoboken: Wiley, 1999.

[12] A. Schrijver, Theory of Linear and Integer Programming, Hoboken: Wiley, 1998.

[13] A. Schrijver, Combinatorial Optimization (3 volume, A,B, & C), New York: Springer, 2003.

[14] M. Garey, D. Johnson and R. Sethi, "The complexity of flowshop and jobshop scheduling," Mathematics of Operations Research, vol. 1, no. 2, 1976.

[15] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, New York: Springer, 2008.

[16] N. Gupta, "A review of flowshop scheduling research," in Disaggregation: Problems in Manufacturing and Service Organizations, New York, Springer, 1979.

[17] V. Klee and G. Minty, "How good is the simplex algorithm?," in Proceedings of the Third Symposium on Inequalities held at the University of California, Los Angeles, Calif., September 1-9, 1969, Los Angeles, 1972.

[18] M. R. Garey and D. Johnson, Computers and Intractability: A Guide to the Theory of NP-Completeness, New York: W. H. Freeman, 1979.

[19] S. Cook, "The complexity of theorem-proving procedures," in Proceedings of the Third Annual ACM Symposium on Theory of Computing, New York, 1971, pp. 151-158.

[20] L. Levin, "Universal search problems," Annals of the History of Computing, vol. 6, no. 4, 1984.

[21] S. Dasgupta, C. Papadimitriou and U. Vazirani, "NP-complete problems," in Algorithms, New York, McGraw-Hill Education, 2006, pp. 233-263.

[22] G. Kendall, A. Parkes and K. Spoerer, "A Survey of NP-Complete Puzzles," ICGA Journal, vol. 31, no. 1, 2008.

[23] T. Walsh, "http://arxiv.org/," March 2014. [Online]. Available: http://arxiv.org/abs/1403.1911. [Accessed 18 7 2015].

[24] I. Lynce and J. Ouaknine, "http://sat.inesc-id.pt," [Online]. Available: http://sat.inesc-id.pt/~ines/publications/aimath06.pdf. [Accessed 18 7 2015].

[25] J. Stuckman and G.-Q. Zhang, "http://arxiv.org/," 2005. [Online]. Available: http://arxiv.org/pdf/cs/0512049.pdf. [Accessed 18 7 2015].

[26] H. Emmons and G. Vairaktarakis, Flow Shop Scheduling: Theoretical Results, Algorithms and Applications, New York: Springer, 2013.

[27] M. Seda, "Mathematical Models of Flow Shop and Job Shop Scheduling Problems," International Journal of Applied Mathematics and Computer Sciences, vol. 4, no. 4, 2007.

[28] E. Johnson, G. Nemhauser and M. Savelsbergh, "Progress in Linear Programming-Based Algorithms for Integer Programming: An Exposition," INFORMS Journal on Computing, vol. 12, no. 1, 2000.

[29] F. Margot, "Symmetry in Integer Linear Programming," in 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art, New York: Springer, 2010.

[30] N. Tyagi, R. Varshney and A. Chandramouli, "Six Decades of Flowshop Scheduling Research," International Journal of Scientific & Engineering Research, vol. 4, no. 9, 2013.

[31] G. Raidl, "A Unified View on Hybrid Metaheuristics," in Hybrid Metaheuristics, Lecture Notes in Computer Science, vol. 4030, 2006.

[32] D.-S. Chen, R. G. Batson and Y. Dang, Applied Integer Programming: Modeling and Solution, Hoboken: Wiley, 2010.

[33] G. Srinivasan, Operations Research: Principles and Applications, New Delhi: Prentice-Hall of India, 2010.

[34] J. Mitchell, "Branch-and-cut algorithms for combinatorial optimization problems," in Handbook of Applied Optimization, Oxford, Oxford University Press, 2002.

[35] C. H. Papadimitriou and K. Steiglitz, Combinatorial Optimization: Algorithms and Complexity, Mineola: Dover Publications, 1998.

[36] A. Land and A. Doig, "An automatic method of solving discrete programming problems," Econometrica, vol. 28, no. 3, 1960.

[37] E. Klotz and A. Newman, "Practical Guidelines for Solving Difficult Mixed Integer Linear Programs," 2013. [Online]. Available: http://inside.mines.edu/~anewman/MIP_practice120212.pdf. [Accessed 18 July 2015].

[38] P. C. Pop, Generalized Network Design Problems: Modeling and Optimization, Berlin/Boston: Walter de Gruyter, 2012.

[39] M. Gen and R. Chen, Genetic Algorithms and Engineering Design, Hoboken: Wiley, 1997.

[40] P. Wang, G. Wang and Z. Hu, "Speeding up the search process of genetic algorithm by fuzzy logic," in Proceedings of the 5th European Congress on Intelligent Techniques and Soft Computing, Aachen, 1997.

[41] A. Eiben and J. Smith, Introduction to evolutionary computing, New York: Springer Science and Business Media, 2003.

[42] A. Eiben, E. Aarts and K. Van Hee, "Global convergence of genetic algorithms: A markov chain analysis," Lecture Notes in Computer Science, vol. 496, 1991.

[43] G. Rudolph, "Convergence analysis of canonical genetic algorithms," IEEE Transactions on Neural Networks, vol. 5, no. 1, 1994.

[44] J. Suzuki, "A Markov chain analysis on simple genetic algorithms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 25, no. 4, 1995.

[45] A. Agapie, "Genetic Algorithms: Minimal Conditions for Convergence," in Third European Conference on Artificial Evolution, Nimes, 1998.

[46] J. Lozano, "Genetic algorithms: bridging the convergence gap," Theoretical Computer Science, vol. 229, 1999.

[47] D. Hermawanto, "http://arxiv.org," [Online]. Available: http://arxiv.org/ftp/arxiv/papers/1308/1308.4675.pdf. [Accessed 19 7 2015].

[48] M. Nawaz, E. Enscore Jr and I. Ham, "A heuristic algorithm for the m-machine, n-job flow-shop sequencing problem," Omega, vol. 11, no. 1, 1983.

[49] É. D. Taillard, "Some efficient heuristic methods for the flow shop sequencing problem," European Journal of Operational Research, vol. 47, no. 1, 1990.

[50] S. Jena, M. Poggi de Aragão and D. Sotelo da Silva, "Competitive Deterministic Heuristics for Permutation Flow Shop Scheduling," Monografias em Ciência da Computação, no. 9, 2009.

[51] H. Lourenço, O. Martin and T. Stützle, "Iterated Local Search," in Handbook of Metaheuristics, New York: Springer, 2003.

[52] Z. Michalewicz and D. B. Fogel, How to Solve It: Modern Heuristics, Berlin: Springer Science & Business Media, 2004.

[53] T. Sawik, Scheduling in Supply Chains Using Mixed Integer Programming, Hoboken: Wiley, 2011.

[54] Rubicite Interactive, "www.rubicite.com," [Online]. Available: http://www.rubicite.com/Tutorials/GeneticAlgorithms/CrossoverOperators/Order1CrossoverOperator.aspx. [Accessed 15 July 2015].

[55] J. Montgomery, "www.academia.edu," November 2002. [Online]. Available: http://www.academia.edu/2808100/The_No_Free_Lunch_Theorems_for_Optimisation_An_Overview. [Accessed 18 July 2015].

[56] N. Krasnogor, A. Aragón and J. Pacheco, "Memetic Algorithms," in Metaheuristic Procedures for Training Neural Networks, New York, Springer, 2006.

[57] E. Kaplan and P. Meier, "Nonparametric Estimation from Incomplete Observations," Journal of the American Statistical Association, vol. 53, no. 282, 1958.

[58] D. G. Kleinbaum, Survival Analysis: A Self-Learning Text, Third Edition, New York: Springer, 2011.

[59] J. P. M. Klein, Survival Analysis: Techniques for Censored and Truncated Data, New York: Springer, 2005.

[60] Swiss HIV Cohort Study, "Clinical progression, survival, and immune recovery during antiretroviral therapy in patients with HIV-1 and hepatitis C virus coinfection: the Swiss HIV Cohort Study," The Lancet, vol. 356, no. 9244, 2000.

[61] P. Moscato and C. Cotta, "A Gentle Introduction to Memetic Algorithms," in Handbook of Metaheuristics, New York, Springer US, 2003, pp. 105-144.

[62] J. M. Ortega, An Introduction to Fortran 90 for Scientific Computing, Oxford: Oxford University Press, 1994.

[63] J. Pitt-Francis and J. Whiteley, Guide to Scientific Computing in C++, New York: Springer, 2012.

[64] I. Balbaert, Getting started with Julia Programming Language, Birmingham: Packt Publishing, 2015.

[65] R. Ruiz and T. Stützle, "A simple and effective iterated greedy algorithm for the permutation flowshop scheduling problem," European Journal of Operational Research, vol. 177, no. 3, pp. 2033-2049, 16 March 2007.
