# Solving Complex Maintenance Planning Optimization Problems Using Stochastic Simulation and Multi-Criteria Fuzzy Decision Making

## Master's Thesis, 2014

MASTER THESIS IN MATHEMATICS/APPLIED MATHEMATICS

Solving Complex Maintenance Planning Optimization Problems Using Stochastic Simulation and Multi-Criteria Fuzzy Decision Making

by Sahar Tahvili

Masterarbete i Matematik/Tillämpad Matematik

DIVISION OF APPLIED MATHEMATICS
MÄLARDALEN UNIVERSITY
SE-721 23 VÄSTERÅS, SWEDEN

Master Thesis in Mathematics/Applied Mathematics

Date: 2014-06-03

Project name: Solving Complex Maintenance Planning Optimization Problems Using Stochastic Simulation and Multi-Criteria Fuzzy Decision Making

Author: Sahar Tahvili

Comprising: 30 ECTS credits

## Dedication

Dedicated to my parents and Razieh Matini, who always encouraged me.

## Acknowledgements

This thesis was completed with the great help and support of my supervisors Professor Sergei Silvestrov and Jonas Österberg. Special thanks to my supervisor Dr. Jonas Biteus at Scania, who responded to all of my questions, and also to the IRIS project team at Scania CV AB, who cared so much about my work.

September 5, 2014
Västerås, Sweden

## Abstract

The main goal of this project is to explore the use of stochastic simulation, genetic algorithms, fuzzy decision making and other tools for solving complex maintenance planning optimization problems. We use two different maintenance activities, corrective maintenance and preventive maintenance. Since the evaluation of specific candidate maintenance policies can take a long time to execute, and the problem of finding the optimal policy is both non-linear and non-convex, we propose the use of genetic algorithms (GA) for the optimization. The main task of the GA is to find the optimal maintenance policy, which involves: (1) the probability of breakdown, (2) calculation of the cost of corrective maintenance, (3) calculation of the cost of preventive maintenance and (4) calculation of ROI (Return On Investment). Another goal of this project is to create a decision making model for multi-criteria systems. To find a near-optimal maintenance policy, we need an overview of the health status of the system components. To model the health of a component we should find all the operational criteria that affect it. We also need to analyze alternative maintenance activities in order to make the best maintenance decisions. For this, the TOPSIS method and fuzzy decision making have been used. To evaluate the proposed methodology, the internal combustion engine cooling system of a typical Scania truck was used as a case study.

**Keywords:** Genetic algorithm, Corrective maintenance, Preventive maintenance, ROI, Multi-criteria decision making, TOPSIS, Fuzzy decision making, Discrete event simulation, Intelligent agent

## Sammanfattning

The main goal of this project is to explore the use of stochastic simulation, genetic algorithms, fuzzy decision support and other tools for the optimization of complex maintenance planning problems. We use two different maintenance activities, corrective maintenance and preventive maintenance. Since the evaluation of specific candidate maintenance policies can take a long time to carry out, and the problem of finding the optimal policy is non-linear and non-convex, we propose the use of genetic algorithms (GA) for the optimization. The most important task of the GA is to find the optimal maintenance policy, which involves: (1) the probability of breakdown, (2) calculation of the cost of corrective maintenance, (3) calculation of the cost of preventive maintenance and (4) calculation of ROI (Return On Investment). Another goal of the project is to create a decision model for multi-criteria systems. To find a near-optimal maintenance policy we must have an overview of the health status of the system components. To model the health of a component we need to find all the criteria that affect it. We must also analyze alternative maintenance activities in order to make the best maintenance decisions. For this, the TOPSIS method and fuzzy decision support have been used. To evaluate the proposed method, the cooling system of a typical Scania truck was chosen for a case study.

**Keywords:** Genetic algorithm, Corrective maintenance, Preventive maintenance, ROI, Multi-criteria decision making, TOPSIS, Fuzzy decision support, Discrete event simulation, Intelligent agent

## Contents

- List of Tables
- List of Figures
- 1 Introduction
  - 1.1 Preventive Maintenance
    - 1.1.1
  - 1.2 Corrective Maintenance
    - 1.2.1
  - 1.3 Return on Investment
    - 1.3.1 Life Cycle Cost
    - 1.3.2 Total Cost Calculation
  - 1.4 Replacement Strategies
  - 1.5 Conclusions
- 2 Component health
  - 2.1 Introduction
  - 2.2 Definitions and Functions
  - 2.3 Cooling System
  - 2.4 The Consequences of PM on the Component Health
  - 2.5 Dynamic Reliability
    - 2.5.1 Riccati differential equation
    - 2.5.2 Dynamic Reliability Equation
  - 2.6 Conclusions
- 3 Multi Criteria Fuzzy Decision Making (MCFD)
  - 3.1 Introduction
  - 3.2 Fuzzy Set and Membership Function
  - 3.3 IFS Generalize Fuzzy Sets
  - 3.4 Fuzzy Implication Operators
  - 3.5 Inclusion Degree Function of IFS
  - 3.6 TOPSIS Method
    - 3.6.1 The Structure of TOPSIS Method
    - 3.6.2 TOPSIS Method in Multiple Criteria Fuzzy Decision Making
  - 3.7 Problem Statement
  - 3.8 A Numerical Example
  - 3.9 Conclusions
- 4 Optimization Methods
  - 4.1 Introduction
  - 4.2 Classical and Non-classical Optimization Methods
  - 4.3 Global Optimization
  - 4.4 Evolutionary Algorithms
  - 4.5 Conclusions
- 5 Genetic Algorithm
  - 5.1 Introduction
  - 5.2 Structure
  - 5.3
  - 5.4 Conclusions
- 6 Maintenance Optimization Model
  - 6.1 Introduction
  - 6.2 Simulation Framework Model
  - 6.3 The Simulation Algorithm
  - 6.4 Example Problem Class
  - 6.5 A Numerical Example
  - 6.6 Conclusions
- 7 Summary, Conclusions and Future Work
  - 7.1 Summary
  - 7.2 Conclusions
  - 7.3 Future Work
- 8 Summary of reflection of objectives in the thesis
  - 8.1 Objective 1 - Knowledge and Understanding
  - 8.2 Objective 2 - Methodological Knowledge
  - 8.3 Objective 3 - Critically and Systematically Integrate Knowledge
  - 8.4 Objective 4 - Independently and Creatively Identify and Carry ...
  - 8.5 Objective 5 - Present and Discuss Conclusions and Knowledge
  - 8.6 Objective 7 - Scientific, Social and Ethical Aspects
- A Java code - Event Triggers
- B Java code - Genetic Algorithm
- C Java code - Simulation Classes
- D Java code - Example Problem Class

## List of Tables

- 1.1 Variable descriptions
- 2.1 Idler Roller parts description
- 2.2 The values in this table have been hidden at the request of Scania
- 3.1 Binary implication
- 3.2 Decision Making by TOPSIS
- 3.3 The Criteria
- 3.4 The alternatives
- 3.5 The inclusion degrees of $A_1^+$ in $M_1$, $M_2$ and the inclusion degrees of $A_1^-$ in $M_1$, $M_2$
- 3.6 The inclusion degrees of $A_2^+$ in $M_1$, $M_2$ and the inclusion degrees of $A_2^-$ in $M_1$, $M_2$
- 3.7 The inclusion degrees $D^+(M_i)$ and $D^-(M_i)$
- 6.1 Parameters defining random events
- 6.2 Parameters defining plannable events
- 6.3 Parameters defining efficiency measures
- 6.4 Parameters defining the default event

## List of Figures

- 1.1 Corrective maintenance function
- 1.2 Costs illustration
- 1.3 An illustration of component age as a function of time with periodic preventive replacement
- 2.1 The bathtub curve hazard function
- 2.2 Cooling system parts for chassis types P-, G-, R-, T-series
- 2.3 Idler roller
- 3.1 TOPSIS illustration
- 4.1 Point 'A' shows a global optimum and the other points indicate local optima
- 4.2 An illustration of an evolutionary algorithm
- 5.1 Genetic code of the parents and the offspring before and after the crossover
- 6.1 The structure of a Decidor
- 6.2 Schematic view of our simulation algorithm
- 6.3 Illustration of the model
- 6.4 Results from optimization of linear decidors in the example problem
- 6.5 Convergence of expected profit

## Chapter 1: Introduction
One of the most important goals in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. The main goals of all maintenance activities are: (1) to wait as long as possible to perform maintenance, so that the amount of useful lifetime of parts that is thrown away is minimized while still avoiding failure; (2) to find the optimal maintenance policy, consisting of scheduled and unscheduled maintenance, so that the life cycle cost is minimized while satisfying system availability and safety requirements.
Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance activities (CM) and preventive maintenance activities (PM). In this work, we also define inspection and imperfect maintenance (a standard maintenance activity which reduces the failure intensity but does not leave the system as good as new), which can be used as a maintenance activity in some situations. Each maintenance policy contains various activities such as 'replacement', 'minimal repair' or 'main repair', etc.

A near-optimal maintenance policy can utilize a single activity or a combination of two or more activities, depending on company policies and procedures. The work reported in this thesis has been conducted at Scania, a major Swedish manufacturer of heavy trucks and buses. According to Scania's policy, there is no repair strategy for many components, and replacement is an acceptable strategy for both corrective and preventive maintenance.

The optimization of maintenance activities is largely affected by their financial implications for a specific corporation: two equivalent systems (mechanical or otherwise) under similar operation may require two quite different maintenance policies at two different corporations.

With the drive to reduce the total cost of ownership of vehicles, maintenance cost analysis and the resulting decisions have become subjects of scrutiny today. A Maintenance Decision Support System (MDSS) must be supported by an extensive cost-benefit analysis, and accurate information about the vehicle health state is essential to this process.

In this chapter we define some engineering and economic concepts and also analyze various replacement strategies.
### 1.1 Preventive Maintenance

**Definition 1.** Preventive maintenance corresponds to a type of planned maintenance that improves the remaining useful life of a component by preventing excess depreciation and impairment.

The main goal of PM is to avoid or mitigate a breakdown of the system. PM includes tests, measurements, adjustments, cleaning, lubrication, minimal repairs, main repairs and part replacements to avoid component failure. PM has a flexible structure and is not limited to the above activities.
#### 1.1.1

One of the most important applications of PM is energy optimization. We summarize some other advantages of preventive maintenance as:

- increasing the efficiency of equipment
- extending the remaining useful life
- increasing the system performance
- improving customer service, because maintenance teams have less unplanned maintenance and can respond more quickly to new problems

Moreover, PM measures increase overall safety levels and reduce insurance and inventory costs.
### 1.2 Corrective Maintenance

**Definition 2.** Corrective maintenance corresponds to a maintenance type comprising subtasks such as identifying, isolating and rectifying a failure, so that the failed component can be restored to an operational condition within the tolerances for in-service operation.

In Figure 1.1 we summarize the function of CM.

Figure 1.1: Corrective maintenance function.

#### 1.2.1

With corrective maintenance we improve product quality and increase component availability. Higher investment in diagnostic equipment and training is a disadvantage of CM.
### 1.3 Return on Investment

**Definition 3.** Return on investment denotes the benefit derived from having spent money on a system (or product) development, change and management. ROI is also a performance measure used to evaluate the efficiency of an investment opportunity.

ROI is calculated by:

$$\mathrm{ROI} = \frac{\text{Return} - \text{Investment}}{\text{Investment}} \tag{1.1}$$

The 'Return − Investment' term in the numerator is the loss or gain realized by making the investment.
For maintenance purposes ROI is defined as:

$$\mathrm{ROI}_M = \frac{C_H - C_{HM}}{I_{HM}} \tag{1.2}$$

where

- $C_H$ is the life-cycle cost of the system when managed using unscheduled maintenance,
- $C_{HM}$ is the life-cycle cost of the system when managed using a health management (HM) approach,
- $I_{HM}$ is the investment in HM.

In this work, $C_H$ is the total life-cycle cost of the system using corrective maintenance and $C_{HM}$ is the total life-cycle cost with preventive maintenance.
If we assume $I_{HM} = 1$, then ROI simplifies to:

$$\mathrm{ROI}_M = C_H - C_{HM} \tag{1.3}$$

Our goal is to maximize ROI. To achieve this we want to make $C_{HM}$ as small as possible; $C_H$ is of course fixed for a particular system.
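As a small illustration of Eq. 1.2 and Eq. 1.3, the sketch below (in the style of the Java code in the appendices) evaluates the maintenance ROI for hypothetical cost figures; none of the numbers are Scania data.

```java
// Sketch of Eq. 1.2: ROI_M = (C_H - C_HM) / I_HM; all values are made up.
public class RoiExample {
    static double roiM(double cH, double cHM, double iHM) {
        return (cH - cHM) / iHM;
    }

    public static void main(String[] args) {
        double cH = 120_000.0;  // assumed life-cycle cost, corrective maintenance only
        double cHM = 90_000.0;  // assumed life-cycle cost with preventive maintenance
        double iHM = 10_000.0;  // assumed investment in health management
        System.out.println("ROI_M = " + roiM(cH, cHM, iHM)); // prints 3.0
        // With I_HM normalized to 1, Eq. 1.3 reduces to the plain cost difference:
        System.out.println("ROI_M (I_HM = 1) = " + roiM(cH, cHM, 1.0));
    }
}
```

With these assumed costs the investment pays back three times over; maximizing ROI then amounts to driving $C_{HM}$ down, exactly as stated above.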
#### 1.3.1 Life Cycle Cost

The life-cycle cost (LCC) can be divided into eight parts,

$$LCC = C_{ini} + C_{ins} + C_e + C_o + C_{mr} + C_{sd} + C_{env} + C_{dd},$$

where

- $C_{ini}$: initial cost (purchase price of the component and all items unique to that component)
- $C_{ins}$: installation cost (shipping, rigging, installation and start-up of the component)
- $C_e$: energy cost (predicted energy cost for system operation)
- $C_o$: operation cost (labor cost of normal system supervision)
- $C_{mr}$: maintenance and repair cost (includes both routine and predicted repairs)
- $C_{sd}$: downtime cost (cost of lost production)
- $C_{env}$: environmental cost
- $C_{dd}$: decommissioning and disposal cost (includes disposal of the component, associated equipment and site restoration)
#### 1.3.2 Total Cost Calculation

We calculate the total maintenance cost $G$ at time $t$ by

$$G(t) = K + C(t) - R(t) \tag{1.4}$$

where $K$ is the replacement cost, $C(t)$ is the maintenance cost and $R(t)$ is the rescue cost.

A preventive replacement is an appropriate decision only if the component has a non-negligible failure rate and the preventive replacement cost is lower than the corrective maintenance cost.
In Figure 1.2 we illustrate the maintenance costs over time.

Figure 1.2: Costs illustration

The maintenance cost curve corresponds to preventive maintenance and increases with time, that is, with more PM. The breakdown cost relates to corrective maintenance and decreases with time because of the increased PM. As Figure 1.2 shows, there is an optimum point at which preventive maintenance should be performed, namely where the maintenance cost and the breakdown cost are equal (point A in this case).

Figure 1.3: An illustration of component age as a function of time with periodic preventive replacement
In Figure 1.3 the component ages with time; the component age is reset upon each preventive replacement.
### 1.4 Replacement Strategies

As mentioned before, there are two maintenance activities: corrective maintenance and preventive maintenance.

Barlow and Hunter examined the optimal use of preventive maintenance in their model from 1960. We summarize the Barlow and Hunter model as:

$$h(t_0)\int_0^{t_0}\left[1 - F(t)\right]dt - F(t_0) = \frac{1}{\dfrac{C_k}{C_f} - 1}, \tag{1.5}$$

where

- $h(t) = \dfrac{f(t)}{1 - F(t)}$ is the hazard function,
- $f(t)$ is the density function of $F(t)$,
- $F(t)$ is the failure probability function of the component,
- $C_k$ denotes the corrective maintenance cost,
- $C_f$ denotes the preventive maintenance cost,
- $C_f < C_k$.

Although Eq. 1.5 provides a simple solution for the optimization of fixed-interval replacement times, the assumptions in the construction of the underlying model are actually quite limiting, and it subsequently fails to capture many real-world problems in sufficient detail.

Wang classified Barlow and Hunter's model in 2002 and introduced three replacement strategies: minimal repair, imperfect repair and perfect repair.

In this section we review different strategies for replacement and select one replacement strategy for our project. The variables in Table 1.1 are used in our set of possible maintenance strategies.
| Variable | Description |
|---|---|
| $C_{PM}$ | Preventive maintenance cost |
| $C_{CM}$ | Corrective maintenance cost |
| $\mu$ | Mean failure time |
| $t_p, 2t_p, 3t_p, \ldots$ | Planned times for replacement |

Table 1.1: Variable descriptions
- **Strategy 1:**
  Component replacement occurs only when a component fails; there is no preventive maintenance in this strategy. We expect $T/\mu$ failures up to time $T$, where $\mu$ is the mean failure time. The maintenance cost here depends solely on the corrective maintenance cost, and we formulate the strategy as a cost per unit time:
  $$\text{Cost}_1 = \frac{C_{CM}}{\mu}$$
- **Strategy 2:**
  In this strategy, we replace the component at the planned time $t_p$ regardless of the component age. We also replace a component whenever it breaks down. This strategy can be formulated as
  $$\text{Cost}_2 = \frac{C_{PM} + C_{CM}\,H(t_p)}{t_p}$$
  where $H(t_p)$ denotes the expected number of failure replacements in the time interval $(0, t_p)$. To determine $H(t_p)$ we used renewal theory. To obtain the above equation, the probability density function (PDF) of the first failure is used, and then the inverse Laplace transform of the function is taken.
- **Strategy 3:**
  In this strategy, replacement occurs only when the component age reaches the planned time; as usual, we also replace the component upon a breakdown. We determine the cost as:
  $$\text{Cost}_3 = \frac{C_{PM}\,(1 - P(t_p)) + C_{CM}\,P(t_p)}{t_p\,(1 - P(t_p)) + \int_0^{t_p} t\,f(t)\,dt}$$
  where $f(t)$ denotes the density function and $P(t_p)$ is the probability of failure before time $t_p$.
If we assume the same situation as in strategy 2, we are able to calculate the downtime. Assume $T_p$ is the downtime that occurs during a planned replacement and $T_f$ the downtime for a replacement due to failure; then:

$$\text{Downtime}_1 = \frac{T_p + T_f\,H(t_p)}{t_p + T_p}$$

With the same assumptions as in strategy 3 we calculate a new downtime as:

$$\text{Downtime}_2 = \frac{T_p\,(1 - P(t_p)) + T_f\,P(t_p)}{(t_p + T_p)(1 - P(t_p)) + \int_0^{t_p} t\,f(t)\,dt + T_f\,P(t_p)}$$
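The strategy-2 cost rate and its downtime measure can be sketched in code. This is a minimal illustration assuming exponentially distributed lifetimes, for which the renewal function is simply $H(t_p) = \lambda t_p$; all parameter values are hypothetical, not Scania data.

```java
// Strategy 2 as code: cost rate Cost_2 = (C_PM + C_CM * H(t_p)) / t_p and the
// downtime fraction Downtime_1 = (T_p + T_f * H(t_p)) / (t_p + T_p).
// Assumption: exponential lifetimes, so H(t_p) = lambda * t_p.
public class Strategy2Example {
    static double cost2(double cPM, double cCM, double h, double tP) {
        return (cPM + cCM * h) / tP;
    }

    static double downtime1(double tPlan, double tFail, double h, double tP) {
        return (tPlan + tFail * h) / (tP + tPlan);
    }

    public static void main(String[] args) {
        double lambda = 0.002;             // assumed failure rate per hour
        double cPM = 200.0, cCM = 1500.0;  // assumed replacement costs
        double tPlan = 2.0, tFail = 10.0;  // assumed downtimes per replacement (h)
        for (double tP : new double[] {250.0, 500.0, 1000.0}) {
            double h = lambda * tP;        // expected failure replacements in (0, t_p)
            System.out.printf("t_p=%6.0f  cost rate=%.3f  downtime fraction=%.5f%n",
                    tP, cost2(cPM, cCM, h, tP), downtime1(tPlan, tFail, h, tP));
        }
    }
}
```

Note that with a constant (exponential) failure rate the corrective term $C_{CM}\lambda$ does not depend on $t_p$, so the cost rate only decreases as $t_p$ grows; this illustrates why fixed-interval replacement pays off only for components whose failure rate increases with age.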
We accept strategy 2, applied at the component level, as a better starting point in this work. In chapter 6 we determine $t_p$ and calculate the maintenance costs and also the ROI by using Eq. 1.3 and Eq. 1.4.
### 1.5 Conclusions

In this chapter we introduced some definitions related to the maintenance concept. We also described some economic terms such as 'Return on Investment' and introduced formulas to calculate it. We are going to select the best maintenance activity for our problem, based on maximizing profit, in chapter 3.

In subsection 1.3.2 we illustrated the different maintenance costs and suggested an optimal point for a preventive maintenance plan. In section 1.4 we introduced and analyzed various strategies for replacement and selected strategy 2 for our problem, in line with Scania's policy. We are going to introduce 'component health' as a new engineering concept in the next chapter and try to find the consequences of preventive maintenance on the system performance.

As future work for the 'replacement strategy' part of this project, we are going to examine strategy 3 and compare the results in the different cases.

## Chapter 2: Component health
### 2.1 Introduction

Component health status, or component health, is an interpretation of the reliability $R(t)$, which we use in this project. As mentioned earlier, with preventive maintenance we are able to maximize ROI and also minimize the potential risk of breakdown. In this chapter we calculate the system reliability.
### 2.2 Definitions and Functions

- **Failure rate** ($\lambda$) is a frequency which indicates the number of component failures per time unit. It can be expressed as:
  $$\lambda = \frac{r}{D \times H \times A_f} \tag{2.1}$$
  where
  - $r$: number of failures,
  - $D$: number of components tested,
  - $H$: test hours per component,
  - $A_f$: the acceleration factor derived from the Arrhenius equation, which can be calculated by:
  $$A_f = e^{\frac{E}{k_B}\left(\frac{1}{T_u} - \frac{1}{T_t}\right)} \tag{2.2}$$
  where
  - $E$: activation energy of the failure mode,
  - $k_B$: Boltzmann's constant $= 8.617 \times 10^{-5}$ eV/K,
  - $T_u$: use temperature,
  - $T_t$: test temperature.
- **Mean Time To Failure** (MTTF) is a standard industry value which gives the average time to failure. The MTTF is calculated by:
  $$MTTF = \frac{1}{\lambda} \tag{2.3}$$
- **Mean Time Between Failures** (MTBF) is the expected time between failures of a system during operation, which can be calculated by:
  $$MTBF = MTTF + MTTR \tag{2.4}$$
  where MTTR is the Mean Time To Repair.
- **Hazard function** $h(t)$ is a calculation of the failure rate over a time interval $(t_1, t_2)$.
- **Failure rate function** $\lambda(t)$ gives the number of failures per unit of time; it is related to the hazard function in that its plot over time has the same shape.
  $$\lambda(t) = \frac{f(t)}{R(t)} \tag{2.5}$$
  where $f(t)$ indicates the density of the time to first failure and $R(t) = 1 - F(t)$ is the reliability function.
- **Degraded factor** ($A$) is the factor by which the mean time between failures of a component is multiplied to get the operational MTBF.
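The definitions above translate directly into code. The following sketch, in the spirit of the Java appendices, evaluates Eq. 2.1 through Eq. 2.4 for made-up test data; the activation energy, temperatures, failure counts and MTTR are all assumptions, not Scania values.

```java
// Eq. 2.1-2.4 as code: failure rate from accelerated test data, then MTTF and MTBF.
public class FailureRateExample {
    static final double K_B = 8.617e-5; // Boltzmann's constant in eV/K

    // Eq. 2.2: A_f = exp((E / k_B) * (1/T_u - 1/T_t))
    static double accelerationFactor(double energyEV, double tUse, double tTest) {
        return Math.exp((energyEV / K_B) * (1.0 / tUse - 1.0 / tTest));
    }

    // Eq. 2.1: lambda = r / (D * H * A_f)
    static double failureRate(int failures, int tested, double hours, double aF) {
        return failures / (tested * hours * aF);
    }

    public static void main(String[] args) {
        double aF = accelerationFactor(0.7, 328.0, 358.0); // assumed energy (eV) and temperatures (K)
        double lambda = failureRate(3, 100, 1000.0, aF);   // assumed test outcome
        double mttf = 1.0 / lambda;                        // Eq. 2.3
        double mtbf = mttf + 4.0;                          // Eq. 2.4 with an assumed MTTR of 4 h
        System.out.printf("A_f=%.2f  lambda=%.3e  MTTF=%.1f  MTBF=%.1f%n", aF, lambda, mttf, mtbf);
    }
}
```

Testing hotter than the use temperature gives $A_f > 1$, so the same number of observed failures translates into a lower field failure rate.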
The "bathtub curve" (Figure 2.1) is a well-known curve in reliability engineering and illustrates a particular form of the hazard function. The bathtub curve includes three phases:

1. Early failures phase: decreasing failure rate
2. Random failures phase: constant failure rate
3. Wear-out failures phase: increasing failure rate

Figure 2.1: The bathtub curve hazard function

As the bathtub curve shows, the Mean Time To Failure lies in the constant-failure-rate phase and gives the predicted elapsed time between inherent failures of a system during operation.

As mentioned earlier, the internal combustion engine cooling system was used as a case study in this project. We analyzed the performance of the cooling system in various areas. To get a better understanding of the system health status, we calculate the failure rate, the mean time between failures and the time to first failure using Eq. 2.1, Eq. 2.3 and Eq. 2.4.
### 2.3 Cooling System

The cooling system is an important system in the truck's engine. Its task is to cool the engine continuously by circulating coolant (a mixture of antifreeze and water).

The cooling system consists of various parts, such as: coolant, idler roller, compressor, belt tensioner, stub axle, generator and poly-V belt.

Figure 2.2 is taken from the Scania workshop manual and shows the cooling system parts in typical Scania trucks (P-, G-, R- and T-series).

Figure 2.2: Cooling system parts for chassis types P-, G-, R-, T-series

To avoid lengthy calculations and to get a better analysis of the health of the cooling system, we chose the idler roller as a representative subcomponent.
The idler roller consists of 10 parts:

Figure 2.3: Idler roller

| Item | Description | Quantity |
|---|---|---|
| 1 | Shell | 1 |
| 2 | Bearing Housing | 2 |
| 3 | Shaft | 1 |
| 4 | Inner Snap Ring | 2 |
| 5 | Bearing | 2 |
| 6 | Female Labyrinth Seal | 2 |
| 7 | Male Labyrinth Seal | 2 |
| 8 | Outer Labyrinth Seal | 2 |
| 9 | Outer Snap Ring | 2 |
| 10 | Cover | 2 |

Table 2.1: Idler Roller parts description

As our study shows, the 'Bearing' (part number 5) is the part most vulnerable to failure.

Table 2.2 presents the failure rate, MTTF, MTBF and time to first failure for the idler roller; the values were calculated using Eq. 2.1, Eq. 2.3 and Eq. 2.4. To compute the idler roller health status we used the Scania workshop manual and warranty statistics (survival analysis).

Note: The values in Table 2.2 have been hidden according to Scania's security policy for confidential data.
Table 2.2: Assembly period (2009 Q1 through 2012 Q4), number of failed chassis, total time, MTBF, MTTF, failure rate and time to first failure for the idler roller. All values have been hidden at the request of Scania.

### 2.4 The Consequences of PM on the Component Health

In this section we calculate the consequences of PM on a mechanical system, which increases system reliability and efficiency.

We rewrite the hazard rate function as a function of the reliability:

$$h(t) = -\frac{1}{R(t)}\frac{dR(t)}{dt} \tag{2.6}$$

where $h(t)$ and $R(t)$ denote the hazard rate function and the reliability function respectively. Following 'The dynamic reliability models for fatigue crack growth problem', we can rewrite the hazard function as:

$$h(t) = \lambda_0 + A\,(R_0 - R(t)) \tag{2.7}$$

where $\lambda_0$ and $A$ represent the initial failure rate and the degraded factor respectively, and $R_0$ is the initial reliability.
### 2.5 Dynamic Reliability

**Definition 4.** The dynamic reliability method provides a mathematical framework capable of handling interactions among components and process variables explicitly.

In other words, the dynamic reliability method represents a more realistic image of the system, its availability, risk and safety.

In this section, we combine Eq. 2.6 and Eq. 2.7 to define a new equation for the dynamic reliability of a mechanical system. To obtain this new equation, we use the Riccati differential equation.

#### 2.5.1 Riccati differential equation

**Definition 5.** The Riccati equation (named after Jacopo Francesco Riccati, 1676-1754) is a nonlinear first-order differential equation which does not belong to the group of classical equations.

The Riccati equation appears in different areas of mathematics, such as the theory of conformal mapping, algebra and geometry. A general form of the Riccati equation can be written as

$$\frac{dy}{dx} = p(x)\,y^2 + q(x)\,y + r(x) \tag{2.8}$$

If $r(x) = 0$, the Riccati differential equation reduces to a Bernoulli differential equation.

If $r(x) \neq 0$, and if $u(x)$ is a known particular solution of the equation, then we substitute

$$y = u + \frac{1}{z} \tag{2.9}$$

Differentiating Eq. 2.9 with respect to $x$:

$$\frac{dy}{dx} = \frac{du}{dx} + \frac{d}{dx}\left(\frac{1}{z}\right) = \frac{du}{dx} - \frac{1}{z^2}\frac{dz}{dx}$$

Substituting this into Eq. 2.8:

$$\begin{aligned}
\frac{du}{dx} - \frac{1}{z^2}\frac{dz}{dx} &= p(x)\left(u + \frac{1}{z}\right)^2 + q(x)\left(u + \frac{1}{z}\right) + r(x) \\
&= p(x)\left(u^2 + \frac{2u}{z} + \frac{1}{z^2}\right) + q(x)\,u + q(x)\frac{1}{z} + r(x) \\
&= \left(p(x)\,u^2 + q(x)\,u + r(x)\right) + \frac{2u}{z}p(x) + \frac{1}{z^2}p(x) + \frac{1}{z}q(x)
\end{aligned}$$

Since $u$ solves Eq. 2.8, the first bracket equals $du/dx$, leaving

$$-\frac{1}{z^2}\frac{dz}{dx} = \frac{2u}{z}p(x) + \frac{1}{z^2}p(x) + \frac{1}{z}q(x)$$

$$\frac{dz}{dx} = -2uz\,p(x) - p(x) - z\,q(x) \tag{2.10}$$

and then

$$\frac{dz}{dx} + \left(2u\,p(x) + q(x)\right)z = -p(x) \tag{2.11}$$

As we see, Eq. 2.11 is a linear differential equation.
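As a concrete check of this substitution, consider a simple Riccati equation (a textbook-style example added here for illustration, not one taken from the thesis) with $p(x) = 1$, $q(x) = 0$, $r(x) = -1$ and the particular solution $u(x) = 1$:

```latex
\[
\frac{dy}{dx} = y^2 - 1, \qquad u(x) = 1 .
\]
\[
\text{Eq. 2.11 gives } \frac{dz}{dx} + 2z = -1
\quad\Longrightarrow\quad z(x) = C e^{-2x} - \tfrac{1}{2},
\qquad
y(x) = 1 + \frac{1}{C e^{-2x} - \tfrac{1}{2}} .
\]
\[
\text{Check: } \frac{dy}{dx} = -\frac{z'}{z^2} = \frac{1 + 2z}{z^2},
\qquad
y^2 - 1 = \frac{2}{z} + \frac{1}{z^2} = \frac{2z + 1}{z^2} . \checkmark
\]
```

The nonlinear equation is thus solved by one quadrature of a linear first-order equation, which is exactly the mechanism used for the dynamic reliability equation in the next subsection.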
#### 2.5.2 Dynamic Reliability Equation

Using the Riccati differential equation, we are able to derive a general form for the dynamic reliability $R(t)$. To obtain this, we solve Eq. 2.7 for $R(t)$:

$$R(t) = \frac{1}{A}\left[(\lambda_0 + AR_0) - h(t)\right] \tag{2.12}$$

Differentiating Eq. 2.12 gives

$$\frac{dR(t)}{dt} = -\frac{1}{A}\frac{dh(t)}{dt} \tag{2.13}$$

Substituting Eq. 2.12 and Eq. 2.13 into Eq. 2.6 yields a Riccati differential equation in $h(t)$:

$$h(t) = \frac{\dfrac{dh(t)}{dt}}{(\lambda_0 + AR_0) - h(t)} \tag{2.14}$$

Eq. 2.14 can be rewritten as

$$\frac{dh(t)}{dt} + h^2(t) - (\lambda_0 + AR_0)\,h(t) = 0$$

We take $h_1(t)$ as a particular solution of this problem:

$$h_1(t) = \lambda_0 + AR_0$$

and then, following subsection 2.5.1, we write

$$h(t) = h_1(t) + \frac{1}{u(t)} \tag{2.15}$$

where $u(t)$ satisfies a linear first-order ODE. Solving it gives

$$\frac{1}{u(t)} = \frac{(\lambda_0 + AR_0)}{c\,(\lambda_0 + AR_0)\exp\big((\lambda_0 + AR_0)t\big) - 1}$$

$$h(t) = \frac{c\,(\lambda_0 + AR_0)^2\exp\big((\lambda_0 + AR_0)t\big)}{c\,(\lambda_0 + AR_0)\exp\big((\lambda_0 + AR_0)t\big) - 1}$$

According to Eq. 2.12 we then have:

$$R(t) = \frac{1}{A}\left[(\lambda_0 + AR_0) - \frac{c\,(\lambda_0 + AR_0)^2\exp\big((\lambda_0 + AR_0)t\big)}{c\,(\lambda_0 + AR_0)\exp\big((\lambda_0 + AR_0)t\big) - 1}\right]
= \frac{1}{A}\cdot\frac{-(\lambda_0 + AR_0)}{c\,(\lambda_0 + AR_0)\exp\big((\lambda_0 + AR_0)t\big) - 1}$$

Choosing the constant $c$ so that $R(0) = R_0$,

$$c = -\frac{\lambda_0}{R_0 A\,(\lambda_0 + AR_0)}$$

we obtain:

$$R(t) = \frac{R_0\,(\lambda_0 + AR_0)}{AR_0 + \lambda_0\exp\big((\lambda_0 + AR_0)t\big)} \tag{2.16}$$
where R_0 is the initial reliability, \lambda_0 is the initial failure rate, which can be calculated by Eq. 2.1, and A indicates the degradation factor, which can be found by fitting Eq. 2.16 to experimental data or simulation results. By using Eq. 2.16 the health evolution of a mechanical system can be calculated.
Eq. 2.16 was introduced by Yuo-Tern Tsai and Kuo-Shong Wang in April 2004 (see , page 91) as a dynamic reliability equation that depicts the degraded behavior of a component. The system reliability shows how close we get to the desired outcome for the given input with as little waste as possible. As future work we are going to use Eq. 2.16 to calculate the system reliability after every preventive or corrective maintenance action and also after every inspection.
By using this approach we are able to determine the consequences of different maintenance policies on the system health. This approach helps us to develop our model and also gives us better visibility of the health of a system.
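Eq. 2.16 is straightforward to evaluate numerically. The sketch below implements it directly; the parameter values are invented for illustration, since in practice A is obtained by fitting the curve to data.

```python
import math

def dynamic_reliability(t, r0, lam0, a):
    """Eq. 2.16: R(t) = r0*(lam0 + a*r0) / (a*r0 + lam0*exp((lam0 + a*r0)*t)),
    with initial reliability r0, initial failure rate lam0 and degradation
    factor a."""
    k = lam0 + a * r0
    return r0 * k / (a * r0 + lam0 * math.exp(k * t))

r0, lam0, a = 0.99, 0.01, 0.5        # assumed values, for illustration only
print(dynamic_reliability(0.0, r0, lam0, a))   # equals r0 at t = 0
print(dynamic_reliability(10.0, r0, lam0, a))  # reliability has degraded
```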
2.6 Conclusions
In this chapter we introduced some new concepts, such as failure rate and time to first failure, to get a better understanding of the health of a component. We selected the cooling system as an example and calculated the failure rate, time to first failure, mean time to failure and mean time between failures for the idle roller, which is a subcomponent of the cooling system.
In section 2.4 we analyzed the consequences of a preventive maintenance action on the component health by using the hazard rate function and the reliability function.
In subsection 2.5.1 we introduced the Riccati differential equation and used it in subsection 2.5.2 to find a new formula to calculate the health evolution of a mechanical system.
As future work for this part of the project we are going to simulate the system reliability for a multi-component system.
In the next chapter we are going to find the best maintenance activity for the cooling system.
CHAPTER 3
Multi Criteria Fuzzy Decision Making (MCFD)
3.1 Introduction
In this chapter, we provide a decision making model for a mechanical system based on maintenance activities and return on investment. We use the health of the system as an indicator to measure the system's performance.
The health of a system depends on the health of all the components that make up the system. Since there are various criteria that affect the health of a mechanical system, our decision making process becomes a multi-criteria decision making problem. In this project we consider the health of a system to be between 0 and 1: a health equal to 0 means that the system fails, and a health equal to 1 indicates a fully healthy system. With this convention, we are able to formulate our decision making problem in a fuzzy environment.
There are different multi-criteria decision making techniques for problem solving, such as AHP (the Analytic Hierarchy Process), TOPSIS (the Technique for Order of Preference by Similarity to Ideal Solution), SAW (Simple Additive Weighting), ELECTRE (ELimination and Choice Expressing REality), SMART (the Simple Multi-Attribute Rating Technique) and ANP (the Analytic Network Process). In this project we use the TOPSIS technique for our decision making model under a fuzzy environment.
To reach this goal we need to identify all criteria that affect the health of the components. Some of these criteria are completely quantifiable, some are partially quantifiable, and other criteria are completely subjective. We also need to define different maintenance activities, such as preventive, corrective, inspective and imperfect maintenance.
The result from this chapter is the best maintenance activity to perform on each component; the policy of when to perform these activities is in fact the main output of the simulation and optimization outlined in the next chapter.
3.2 Fuzzy Set and Membership Function
A membership function indicates the degree of truth as an extension of evaluation. This concept was introduced by Zadeh in 1965. Fuzzy truth represents membership in vaguely defined sets. Some basic definitions of fuzzy sets, membership functions and intuitionistic fuzzy sets are reviewed by Yun Shi , Kerre  and Yang.
Definition 6. A fuzzy set is a pair (A, m) where A is a set and m : A → [0, 1].
For each x ∈ A, m(x) is called the grade of membership of x in (A, m). For a finite set A = {x_1, . . . , x_n}, the fuzzy set (A, m) is often denoted by {m(x_1)/x_1, . . . , m(x_n)/x_n}. Let x ∈ A. Then x is called fully included in the fuzzy set (A, m) if m(x) = 1 and is called not included if m(x) = 0. The set {x ∈ A | m(x) > 0} is called the support of (A, m) and the set {x ∈ A | m(x) = 1} is called the kernel. x is called a fuzzy member if 0 < m(x) < 1.
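As a tiny illustration of Definition 6, a finite fuzzy set can be stored as a map from elements to membership grades; the support, kernel and fuzzy members then fall out directly. The elements and grades below are invented.

```python
# A finite fuzzy set (A, m) represented as a dict from elements to grades.
A = {"x1": 0.0, "x2": 0.3, "x3": 1.0, "x4": 0.7}

support = {x for x, m in A.items() if m > 0}            # {x in A | m(x) > 0}
kernel = {x for x, m in A.items() if m == 1}            # {x in A | m(x) = 1}
fuzzy_members = {x for x, m in A.items() if 0 < m < 1}  # partial membership

print(sorted(support))        # ['x2', 'x3', 'x4']
print(sorted(kernel))         # ['x3']
print(sorted(fuzzy_members))  # ['x2', 'x4']
```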
Definition 7. For any set X, a membership function on X is any function from X to the real unit interval [0, 1]. The membership function which represents a fuzzy set A is denoted by \mu_A. For an element x of X, the value \mu_A(x) is called the membership degree of x in the fuzzy set A.
According to  we are able to model unknown information by using an additional degree and intuitionistic fuzzy sets (IFS).
3.3 IFS Generalize Fuzzy Sets
Definition 8. An Intuitionistic Fuzzy Set A on a universe U is defined as an object of the following form:
A = {(u, \mu_A(u), \nu_A(u)) | u ∈ U}, where the functions \mu_A : U → [0, 1] and \nu_A : U → [0, 1] define the degree of membership and the degree of non-membership of the element u ∈ U in A, respectively, and for every u ∈ U we have 0 ≤ \mu_A(u) + \nu_A(u) ≤ 1.
According to  a fuzzy set can be written as:

{(u, \mu_A(u), 1 - \mu_A(u)) | u ∈ U}    (3.1)

IFS thus generalize fuzzy sets: a fuzzy set is the special case in which the non-membership function is determined by the membership function through \nu = 1 - \mu.
3.4 Fuzzy Implication Operators
The following table summarizes the classical binary implication:

a | b | a → b
--|---|------
0 | 0 |   1
0 | 1 |   1
1 | 0 |   0
1 | 1 |   1

Table 3.1: Binary implication
Definition 9. A mapping I : [0, 1]^2 → [0, 1] is a fuzzy implication if it satisfies the boundary conditions:
I(0, 0) = I(0, 1) = I(1, 1) = 1 and I(1, 0) = 0.
A fuzzy implication can be generated by using three different approaches: R-implications, S-implications and QL-implications. In the present work we use R-implications.
3.5 Inclusion Degree Function of IFS
Assume U is a finite universe and R is an implication. I_{IFS} is an inclusion degree function of IFS if R satisfies the following conditions:
· for all a, b ∈ [0, 1], a ≤ b implies R(a, b) = 1
· R(a, b) is non-decreasing with respect to b and non-increasing with respect to a.
By using this definition we can write

I_{IFS}(A, B) = \frac{1}{|U|}\sum_{u\in U}[\alpha R(\mu_A(u), \mu_B(u)) + (1 - \alpha)R(\nu_B(u), \nu_A(u))], \quad \alpha \in [0, 1],    (3.2)

where |U| is the cardinality of U, which can be calculated by

|U| = \sum_{u\in U}\frac{1 + \mu_A(u) - \nu_A(u)}{2}.    (3.3)
There are different methods, introduced by several mathematicians, to calculate an R-implication. We use the Lukasiewicz implication:

R_L(a, b) = min(1 - a + b, 1).    (3.4)
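Eqs. 3.2-3.4 can be sketched in a few lines of Python. The code below evaluates the inclusion degree of a positive ideal solution in an alternative, using the values that appear later in the worked example of section 3.8 (with alpha = 0.5 and |U| taken as the number of criteria, as in that example).

```python
def r_lukasiewicz(a, b):
    """Eq. 3.4: the Lukasiewicz implication R_L(a, b) = min(1 - a + b, 1)."""
    return min(1.0 - a + b, 1.0)

def inclusion_degree(A, B, alpha=0.5):
    """Eq. 3.2: inclusion degree of IFS A in IFS B. A and B map each
    criterion to a (mu, nu) pair; |U| is taken as the number of criteria
    (the example in Sec. 3.8 uses |U| = 2 rather than Eq. 3.3)."""
    total = 0.0
    for crit in A:
        mu_a, nu_a = A[crit]
        mu_b, nu_b = B[crit]
        total += alpha * r_lukasiewicz(mu_a, mu_b) \
            + (1.0 - alpha) * r_lukasiewicz(nu_b, nu_a)
    return total / len(A)

a_pos_1 = {"C1": (0.6, 0.2), "C2": (0.5, 0.0)}  # A_1^+ from Sec. 3.8
m1 = {"C1": (0.5, 0.6), "C2": (0.5, 0.1)}       # M_1 restricted to C1, C2

print(inclusion_degree(a_pos_1, m1))  # ~0.85, as in Table 3.5
```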
3.6 TOPSIS Method
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a widely used analysis method for multi-criteria decision making. TOPSIS was developed by Hwang and Yoon in 1981 and extended by Yoon in 1987.
The TOPSIS method is based on two main solutions: (1) the positive ideal solution, which has the best attribute values, and (2) the negative ideal solution, which has the worst attribute values.
TOPSIS measures the geometric distance between each alternative and the positive and negative ideal solutions and selects the best one. The best alternative is the alternative which has the shortest distance from the positive ideal solution and the farthest distance from the negative ideal solution .
3.6.1 The Structure of the TOPSIS Method
The TOPSIS method consists of six steps. We assume a decision making problem that has m alternatives and n criteria. The steps can be performed as follows:
Step 1:
Obtain the m alternatives and n criteria and create the evaluation matrix with m rows and n columns. The intersection of alternative i and criterion j is denoted x_{ij}, giving the matrix (x_{ij})_{m×n}.

Step 2:
Create a set of weights w_j for the criteria and normalize the matrix (x_{ij})_{m×n}.

Step 3:
Identify the positive ideal solution, denoted A^+, and the negative ideal solution, denoted A^-.

Step 4:
Measure the distance between each alternative and A^+, denoted D^+, and the distance between each alternative and A^-, denoted D^-.

Step 5:
Determine the ranking index p_i of each alternative:

p_i = \frac{D^+(M_i)}{D^-(M_i) + D^+(M_i)}

Step 6:
Order the alternatives according to the index in step 5.
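The six steps can be sketched for a crisp (non-fuzzy) decision matrix as follows. The matrix and weights are invented for illustration, and the ranking index here uses the classical TOPSIS convention D^-/(D^+ + D^-) with Euclidean distances, whereas the fuzzy variant used later in this chapter ranks by the inclusion-degree index of Eq. 3.7.

```python
import math

# 3 alternatives (rows) scored on 3 benefit criteria (columns); invented data.
X = [[7.0, 9.0, 9.0],
     [8.0, 7.0, 8.0],
     [9.0, 9.0, 9.0]]
w = [0.5, 0.3, 0.2]
m, n = len(X), len(X[0])

# Steps 1-2: vector-normalize each column, then apply the criterion weights.
norms = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[w[j] * X[i][j] / norms[j] for j in range(n)] for i in range(m)]

# Step 3: positive/negative ideal solutions (benefit criteria: max and min).
a_pos = [max(V[i][j] for i in range(m)) for j in range(n)]
a_neg = [min(V[i][j] for i in range(m)) for j in range(n)]

# Step 4: Euclidean distances of each alternative to the two ideals.
d_pos = [math.dist(V[i], a_pos) for i in range(m)]
d_neg = [math.dist(V[i], a_neg) for i in range(m)]

# Steps 5-6: ranking index and ordering; here alternative 3 attains the best
# value on every criterion, so it coincides with A+ and gets index 1.
p = [d_neg[i] / (d_pos[i] + d_neg[i]) for i in range(m)]
best = max(range(m), key=lambda i: p[i])
print(p, best)
```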
To illustrate the TOPSIS method, we assume a problem with 5 criteria and 3 alternatives:

Figure 3.1: TOPSIS illustration: criteria C_1, ..., C_5 each influence the alternatives M_1, M_2, M_3, which lead to the decision.
As we see, every criterion affects every single alternative. With a set of alternatives and criteria, the decision maker can make, for example, three decisions as follows:
Criteria | Alternatives  | Decision 1 | Decision 2 | Decision 3
C_1      | M_1, M_2, M_3 | M_2, M_1   | M_1        | M_3
C_2      | M_1, M_2, M_3 | M_3        | M_1, M_2   | M_2, M_3
C_3      | M_1, M_2, M_3 | M_1, M_3   | M_3        | M_1
C_4      | M_1, M_2, M_3 | M_3, M_2   | M_2        | M_2
C_5      | M_1, M_2, M_3 | M_2        | M_2        | M_3

Table 3.2: Decision Making by TOPSIS
Note: the best decision at a given time can be a single alternative or a combination of two or more alternatives.
3.6.2 TOPSIS Method in Multiple Criteria Fuzzy Decision Making
Since we consider our problem to be a multi-criteria decision problem in a fuzzy environment, we define A_f^+ as a fuzzy positive ideal solution and A_f^- as a fuzzy negative ideal solution.
We now use the TOPSIS method to calculate the distances to A_f^+ and A_f^-.
Assume that we have a set of criteria C and a set of alternatives M:

C = {C_1, C_2, ..., C_m}
M = {M_1, M_2, ..., M_n}
According to  we assume that the alternatives and criteria are represented (using IFS) as:

M_1 = {(C_1, \mu_{1,1}, \nu_{1,1}), (C_2, \mu_{1,2}, \nu_{1,2}), ..., (C_m, \mu_{1,m}, \nu_{1,m})}
M_2 = {(C_1, \mu_{2,1}, \nu_{2,1}), (C_2, \mu_{2,2}, \nu_{2,2}), ..., (C_m, \mu_{2,m}, \nu_{2,m})}
...
M_n = {(C_1, \mu_{n,1}, \nu_{n,1}), (C_2, \mu_{n,2}, \nu_{n,2}), ..., (C_m, \mu_{n,m}, \nu_{n,m})},

where \mu_{i,j} indicates the degree to which the alternative M_i satisfies criterion C_j, and \nu_{i,j} indicates the degree to which the alternative M_i does not satisfy criterion C_j.
Definition 10. A fuzzy positive ideal solution is defined as

A_f^+ = {(C_1, Max_i{\mu_{i,1}}, min_i{\nu_{i,1}}),
         (C_2, Max_i{\mu_{i,2}}, min_i{\nu_{i,2}}),
         ...
         (C_m, Max_i{\mu_{i,m}}, min_i{\nu_{i,m}})}.
Definition 11. A fuzzy negative ideal solution is defined as

A_f^- = {(C_1, min_i{\mu_{i,1}}, Max_i{\nu_{i,1}}),
         (C_2, min_i{\mu_{i,2}}, Max_i{\nu_{i,2}}),
         ...
         (C_m, min_i{\mu_{i,m}}, Max_i{\nu_{i,m}})}.
To calculate the distances between the alternatives and A_f^+ and A_f^- we define two inclusion degrees as follows:

Definition 12. The inclusion degree D^+(M_i) of the positive ideal solution in alternative M_i and the inclusion degree D^-(M_i) of the negative ideal solution in alternative M_i are respectively defined as

D^+(M_i) = Max(I(A_f^+, M_i))    (3.5)
D^-(M_i) = min(I(M_i, A_f^-)),    (3.6)

where I denotes the inclusion degree function, see Eq. 3.2.
Definition 13. The ranking index of alternative M_i is defined as

p_i = \frac{D^+(M_i)}{D^-(M_i) + D^+(M_i)}    (3.7)

where 0 ≤ p_i ≤ 1.
If there exists i_0 ∈ {1, 2, ..., n} such that p_{i_0} = Max{p_1, p_2, ..., p_n}, then the alternative M_{i_0} is the best alternative.
3.7 Problem Statement
To use MCFD for solving our problem we need to identify all criteria that affect the component's health. To perform this process, we analyzed various components in Scania's technical module system with the help of engineers at Scania R&D. In table 3.3 we list the 15 most important criteria with a high impact on a mechanical system.
Number | Description
C_1    | Calendar time
C_2    | Mileage
C_3    |
C_4    | Material operation
C_5    | Components health status
C_6    | Humidity
C_7    | Temperature
C_8    |
C_9    |
C_10   | Usage
C_11   | Fuel quality
C_12   | Driving styles
C_13   | Environment
C_14   | Speed
C_15   |

Table 3.3: The Criteria
We categorize the different maintenance policies in table 3.4; by `No Action' we mean that no maintenance activity should be run in certain special time intervals. For example, for a component at the end of its remaining useful life, `No Action' is an optimal decision.
Number | Description
M_1    | Corrective Maintenance
M_2    | Imperfect Maintenance
M_3    | Preventive Maintenance
M_4    | Inspection
M_5    | No Action

Table 3.4: The alternatives
3.8 A Numerical Example
To get a better understanding of MCFD, we analyse the internal combustion engine cooling system (cooling system for short) as a real-world example. In this section, we try to find the best maintenance activity for the cooling system in a typical engine.
As we mentioned in the last section, we need to define the different alternatives and also identify all criteria which have a direct effect on the health of a component.
To define the maintenance activities, we need to study the company policies and the customer's perspective and requirements.
Let us assume that M_1, M_2, M_3 are three maintenance alternatives which indicate corrective maintenance, imperfect maintenance and preventive maintenance, respectively.
To identify the criteria with the highest influence we need knowledge of the mechanical properties of the component. We use table 3.3 to choose related criteria with the highest impact on the cooling system's health.
Let C_1, C_2, C_3 and C_4 be the criteria that represent mileage, temperature, time and humidity.
As a decision maker, we want to find which of the alternatives M_i best satisfies the criteria C_1 and C_2, or just C_3, according to the customer's perspective and the company's policies.
Suppose that the relationships between alternatives and criteria are:

M_1 = {(C_1, (0.5, 0.6)), (C_2, (0.5, 0.1)), (C_3, (0.2, 0.4)), (C_4, (0.1, 0.5))}
M_2 = {(C_1, (0.5, 0.6)), (C_2, (0.5, 0)), (C_3, (0.3, 0.6)), (C_4, (0.5, 0.2))}
M_3 = {(C_1, (0.6, 0.2)), (C_2, (0.4, 0.3)), (C_3, (0.2, 0.3)), (C_4, (0.4, 0.1))}
To find the above values, we studied Scania's survival analysis (warranty data). In this study we compared the cooling system efficiency in different regions.
To estimate the exact coefficients for these relationships we would need to perform accurate data mining with a suitable tool such as RapidMiner.
Now we can construct the positive and negative ideal solutions:

A_1^+ = {(C_1, (0.6, 0.2)), (C_2, (0.5, 0))}
A_2^+ = {(C_3, (0.3, 0.3))}
A_1^- = {(C_1, (0.5, 0.6)), (C_2, (0.4, 0.3))}
A_2^- = {(C_3, (0.2, 0.6))}

where the first elements in A_1^+ and A_2^+ are the maximum membership values and the second elements are the minimum non-membership values. That is, a positive ideal solution is the set of elements with the best values and a negative ideal solution is the set with the worst values.
We then calculate the inclusion degree function by using Eq. 3.2, but before that we need to calculate R_L by using Eq. 3.4 (Lukasiewicz implication). For I(A_1^+, M_1) the four weighted terms are:

C_1: \alpha R_L(\mu_{A_1^+}, \mu_{M_1}) = 0.5 × min(1 - 0.6 + 0.5, 1) = 0.5 × 0.9 = 0.45
C_1: (1 - \alpha) R_L(\nu_{M_1}, \nu_{A_1^+}) = 0.5 × min(1 - 0.6 + 0.2, 1) = 0.5 × 0.6 = 0.3
C_2: \alpha R_L(\mu_{A_1^+}, \mu_{M_1}) = 0.5 × min(1 - 0.5 + 0.5, 1) = 0.5 × 1 = 0.5
C_2: (1 - \alpha) R_L(\nu_{M_1}, \nu_{A_1^+}) = 0.5 × min(1 - 0.1 + 0, 1) = 0.5 × 0.9 = 0.45

Note that \alpha is a weighting value between 0 and 1; we set \alpha = 0.5 in this example, and |U| is the cardinality of U, here |U| = 2.

By using Eq. 3.2 we have:

I(A_1^+, M_1) = \frac{1}{2} × (0.45 + 0.3 + 0.5 + 0.45) = 0.85
then:

              M_1    M_2    M_3
I(A_1^+, M_i) 0.85   0.825  0.9
I(M_i, A_1^-) 0.925  0.9    0.875

Table 3.5: The inclusion degrees of A_1^+ in M_1, M_2, M_3 and the inclusion degrees of A_1^- in M_1, M_2, M_3
and also:

              M_1    M_2    M_3
I(A_2^+, M_i) 0.45   0.425  0.475
I(M_i, A_2^-) 0.45   0.475  0.425

Table 3.6: The inclusion degrees of A_2^+ in M_1, M_2, M_3 and the inclusion degrees of A_2^- in M_1, M_2, M_3
By using Eq. 3.5 and Eq. 3.6 we can calculate the inclusion degrees D^+(M_i) and D^-(M_i):

          M_1    M_2    M_3
D^+(M_i)  0.85   0.825  0.9
D^-(M_i)  0.45   0.475  0.425

Table 3.7: The inclusion degrees D^+(M_i) and D^-(M_i)
The ranking index of the alternatives (p_i) can be calculated by using Eq. 3.7 as:

p_1 = \frac{0.85}{0.85 + 0.45} = 0.65
p_2 = \frac{0.825}{0.825 + 0.475} = 0.634
p_3 = \frac{0.9}{0.9 + 0.425} = 0.679

As we see, p_3 = 0.679 is the largest index, so M_3 is the best alternative, which indicates preventive maintenance in this case.
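The last step can be reproduced in a couple of lines, applying Eq. 3.7 to the inclusion degrees of Table 3.7:

```python
# Ranking index p_i = D+(M_i) / (D-(M_i) + D+(M_i)) on Table 3.7's values.
d_pos = {"M1": 0.85, "M2": 0.825, "M3": 0.9}
d_neg = {"M1": 0.45, "M2": 0.475, "M3": 0.425}

p = {alt: d_pos[alt] / (d_neg[alt] + d_pos[alt]) for alt in d_pos}
best = max(p, key=p.get)
print({alt: round(v, 2) for alt, v in p.items()})  # {'M1': 0.65, 'M2': 0.63, 'M3': 0.68}
print(best)                                        # M3, i.e. preventive maintenance
```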
3.9 Conclusions
In this chapter we defined our problem in a fuzzy environment and used the TOPSIS method for the decision making. As we mentioned earlier, the main advantage of using fuzzy logic is its flexibility.
We started this chapter with an introduction to fuzzy logic. We defined some concepts of the fuzzy environment in sections 3.2, 3.3, 3.4 and 3.5. We found the TOPSIS method to be a suitable decision making method for our problem and described the structure of this method in subsection 3.6.1.
By defining a positive and a negative ideal solution in subsection 3.6.2 we estimated the best and worst cases for our problem. In fact we sketched a fuzzy frame for the problem; our goal is to stay far away from the worst case (the negative ideal solution) and to get as close as possible to the best case (the positive ideal solution).
We identified fifteen criteria with a direct effect on the cooling system's health and defined some alternatives as maintenance activities in section 3.7.
As future work for the decision making part of this project we are going to examine some other multiple-criteria decision methods, such as fuzzy value analysis (VA), the VIKOR method and the weighted product model.
Until now we have defined different maintenance activities, analyzed the system health and made a decision to choose the best alternative. In the next chapter we analyze different optimization methods to find a suitable algorithm for optimization.
CHAPTER 4
Optimization Methods
4.1 Introduction
In decision making problems it is normal to have multiple design objectives, and different optimization methods exist today for solving these kinds of problems.
Most optimization algorithms and methods have been developed and improved from their previous versions. As discussed in , the solutions generated by most optimization algorithms cannot be used for problems that have a combination of discrete and continuous variables. Those algorithms mostly provide local optima as final solutions.
In this chapter, we will look at different ways to solve multi-objective design optimization problems and also find the best optimization algorithm for solving our problem, one that provides a global optimum.
4.2 Classical and Non-classical Optimization Methods
We classify the optimization problems, with respect to their objective functions and constraints, into four groups:
1. Unconstrained Single-Objective Optimization (USOOP)
2. Constrained Single-Objective Optimization (SOOP)
3. Unconstrained Multi-Objective Optimization (UCMOP)
4. Constrained Multi-Objective Optimization (CMOP)
There are two approaches to solving these problems: classical optimization techniques and intelligent (non-classical) optimization techniques.
A classical optimization method is an analytical method that handles differentiable functions. This kind of method is efficient when its underlying assumptions are fulfilled . The classical optimization techniques do not support non-differentiable optimization problems. They cannot solve large-scale problems and they are also sensitive to changes in the parameters, which is one potential disadvantage of using a classical optimization method. The trust region method and the interior point method are two well-known classical optimization methods.
The intelligent (non-classical) optimization methods have been developed specifically for those cases where the classical methods are not suitable: high-dimensional searches or problems with many local optima. Since an intelligent method investigates all possible solutions, the number of evaluations can be very high, and therefore these methods are applied in connection with computer experiments . The intelligent optimization methods are also able to find the optimum solution for a CMOP. The penalty function method, resource allocation optimization methods, multi-objective methods and co-evolutionary methods are some well-known intelligent optimization methods.
4.3 Global Optimization
Global optimization, per definition, refers to finding the extreme value of a given non-convex function in a certain feasible region . In most cases, the classical optimization methods are not able to solve global optimization problems, because these methods usually get trapped in a local optimum. Moreover, classical methods can neither generate nor use the global information that is needed to find the global solution.
To get a better understanding of local and global optimization, we analyze Figure 4.1 below:
Figure 4.1: Point 'A' marks the global optimum; the other peaks indicate local optima.
Point B refers to a local optimum, because it is the highest point only in its own area. In other words, the peak of a mountain always indicates a local optimum. As we see in Figure 4.1, there are several mountains; the highest mountain is the global optimum, which is point A in this example.
Finding the global optimum point is the challenge in many cases. Metaheuristic algorithms are newer solvers designed to find the global optimum of optimization problems. Metaheuristic algorithms are able to implement a stochastic optimization. Compared to other optimization algorithms, metaheuristic algorithms do not guarantee that a globally optimal solution can be found for all classes of optimization problems. The final solutions provided by metaheuristic algorithms depend on a set of random variables .
Metaheuristic algorithms can be classified by different approaches, for example population-based searches, or categorized by the type of search strategy.
In this project, we review metaheuristics of the population-based search type to find a suitable optimization algorithm for solving our problem. Population-based metaheuristics include evolutionary algorithms, particle swarm optimization (PSO), the Imperialist Competitive Algorithm (ICA) and the Intelligent Water Drops algorithm (IWD) .
4.4 Evolutionary Algorithms
As we mentioned before, evolutionary algorithms (EA) are a subset of population-based metaheuristics, inspired by biological evolutionary mechanisms in nature such as reproduction, mutation, recombination, and selection . We summarize an EA's operation as two processes (algorithms) that work simultaneously:
1. Evaluation
2. Optimization
To get a better understanding of an EA's functions, we analyse an example:

min f(x)  s.t.  g(x) ≥ g_0    ⟶    min f(x) + \lambda\,max\Big(0, 1 - \frac{g(x)}{g_0}\Big)
The EA tries to optimize the objective function and simultaneously tries to find a feasible set for the solutions. In fact, we are going to find the minimum value of f(x) and also find the value of \lambda. To solve this problem we need to run two algorithms in parallel with different tasks. The first algorithm's task is finding the value of \lambda, and the second algorithm tries to optimize the objective function f(x).
As we see in Figure 4.2, Algorithm 1 finds some \lambda, which can be denoted \lambda_1, ..., \lambda_m. Algorithm 2 uses \lambda and returns a set of solutions such as {x_1^\lambda, x_2^\lambda, ...}. Algorithm 1 recognizes the best \lambda and transforms every \lambda. After the transformation, Algorithm 2 produces a new set of x^\lambda.
Figure 4.2: An illustration of an evolutionary algorithm: Algorithm 1 proposes \lambda_1, ..., \lambda_m, and each \lambda_i is passed to an instance of Algorithm 2, which returns a solution x^{\lambda_i}.
Note: to recognize the best \lambda in Algorithm 1, an optimization process must be performed every time.
In fact, Algorithm 1 is a 'meta-algorithm' (external) that consists of various members (\lambda), and every member in Algorithm 1 calls Algorithm 2.
Genetic algorithms (GA), the artificial bee colony algorithm, ant colony optimization algorithms, evolution strategies (ES) and the imperialist competitive algorithm are some examples of evolutionary algorithms.
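The two-algorithm scheme can be sketched as follows: the outer loop (Algorithm 1) proposes penalty multipliers lambda, and the inner solver (Algorithm 2) minimizes the penalized objective f(x) + lambda·max(0, 1 - g(x)/g_0) for each of them. The toy problem (min x² subject to x ≥ 1) and the grid-search inner solver are illustrative stand-ins of our own, not the thesis's setup.

```python
def inner_minimize(penalized, lo=-3.0, hi=3.0, n=600):
    """Algorithm 2: minimize the penalized objective on a uniform grid."""
    best_x, best_val = lo, penalized(lo)
    for i in range(1, n + 1):
        x = lo + i * (hi - lo) / n
        v = penalized(x)
        if v < best_val:
            best_x, best_val = x, v
    return best_x

f = lambda x: x * x     # objective
g = lambda x: x         # constraint: g(x) >= g0
g0 = 1.0

best = None
for lam in [0.5, 1.0, 2.0, 4.0, 8.0]:   # Algorithm 1: candidate multipliers
    x = inner_minimize(lambda x: f(x) + lam * max(0.0, 1.0 - g(x) / g0))
    if g(x) >= g0 - 0.05 and (best is None or f(x) < f(best)):
        best = x                         # keep the best feasible solution

print(best)   # close to the constrained optimum x = 1
```

For small multipliers the inner solver returns an infeasible minimizer, and the penalized minimum only matches the constrained optimum once lambda is large enough; this is exactly why Algorithm 1 has to search over lambda.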
4.5 Conclusions
According to our problem's conditions we need to find a robust and fast optimization algorithm for problem solving. As we reviewed in this chapter, the classical optimization algorithms are not able to solve large-scale optimization problems such as the one in this work.
We compared various optimization algorithms from both the classical and the intelligent optimization families. According to the problem's conditions, limitations and our goals, the intelligent optimization methods are better suited for solving our problem.
After an accurate pre-study we chose the genetic algorithm as a suitable optimization algorithm for solving our problem. In the next chapter we analyze the genetic algorithm.
CHAPTER 5
Genetic Algorithm
5.1 Introduction
As we mentioned in the last chapter, the genetic algorithm is a type of evolutionary algorithm that was introduced by John Holland. Like other evolutionary algorithms, GA uses the biological processes of reproduction and natural selection to search for the `fittest' solutions .
Genetic algorithms solve both constrained and unconstrained multi-objective optimization problems and are also able to solve problems with discontinuous, stochastic, highly nonlinear or ill-defined objective functions . As we explained in the last chapter, classical optimization methods are not able to solve a wide range of redundancy allocation problems. A recent study published in 'Reliability Engineering and System Safety' shows that the genetic algorithm is an efficient meta-heuristic method for solving combinatorial optimization problems .
In this chapter, we introduce, illustrate and discuss the genetic algorithm as a suitable algorithm for solving our problem.
5.2 Structure
Before we begin describing the structure of GA, we need to define the solution encoding, which is called `chromosomes' in the GA context. In fact, the chromosomes are a set of parameters which propose a solution to the initial problem, and they are represented as a simple string. The design of the chromosomes and the parameters depends on the initial problem. We describe our design for both chromosomes and parameters in chapter 6.
We summarize the structure of the genetic algorithm as:
1. Fitness function
2. Initial population of chromosomes
3. Selection of parents for generation of the new population
4. Crossover to produce the next generation of chromosomes
5. Random mutation
Figure 5.1 represents the structure of GA with a numerical example:
Before crossover (the crossover point, chosen at random, lies between the two halves):
1st parent genetic code:    010111010110 100010101010
2nd parent genetic code:    101101101010 011010101010

After crossover:
1st offspring genetic code: 010111010110 011010101010
2nd offspring genetic code: 101101101010 100010101010

Figure 5.1: Genetic code of the parents and the offspring before and after the crossover
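The five ingredients listed in section 5.2, with the single-point crossover of Figure 5.1, can be sketched as a minimal GA. The fitness function below (counting 1-bits, "onemax") is a stand-in of our own for the maintenance objective of the coming chapters.

```python
import random

rng = random.Random(1)
N_BITS, POP, GENS = 24, 20, 60

def fitness(chrom):                 # 1. fitness function (toy: count 1-bits)
    return chrom.count("1")

def crossover(p1, p2):              # 4. single-point crossover, as in Fig. 5.1
    point = rng.randrange(1, N_BITS)
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def mutate(chrom, rate=0.01):       # 5. random mutation: flip individual bits
    return "".join(b if rng.random() > rate else "10"[int(b)] for b in chrom)

def select(pop):                    # 3. tournament selection of a parent
    return max(rng.sample(pop, 3), key=fitness)

# 2. initial population of chromosomes (random bit strings)
pop = ["".join(rng.choice("01") for _ in range(N_BITS)) for _ in range(POP)]

history = []
for _ in range(GENS):
    elite = max(pop, key=fitness)   # elitism: carry the best chromosome over
    nxt = [elite]
    while len(nxt) < POP:
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:POP]
    history.append(fitness(max(pop, key=fitness)))

print(history[-1])   # best fitness found (at most 24)
```

Because the best chromosome is always carried over, the best fitness in the population never decreases from one generation to the next.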
5.3
The most important GA application is optimization. GA is a suitable method for solving both synthetic and numerical problems such as graph coloring, routing, partitioning and also the TSP. Machine learning is the second most important GA usage, which can be categorized as:
· Prediction and classification
· Prediction of weather
· Designing neural networks
Population genetics, ecology (modelling symbiosis), immune systems, automatic programming and filter design are other important applications of genetic algorithms .
By using GA, we avoid local optima and have a chance of finding a globally optimal solution for the problem, and the solution becomes better and better with time.
With GA, the fitness function can be changed from iteration to iteration, which allows incorporating new data into the model if it becomes available .
GA supports multi-objective optimization, which is one of the most important benefits of GA. The modular genetic algorithm (MGA) is separated from the application, so its building blocks can be used in hybrid applications^1.
We use the genetic algorithm as a suitable optimization algorithm in the next chapter.
5.4 Conclusions
The goal of this chapter was to introduce the genetic algorithm. We found the genetic algorithm to be a robust, fast and suitable optimization algorithm for optimizing our problem. We analyzed the structure of the genetic algorithm and discussed some advantages of using it in this chapter. In the next chapter we implement the genetic algorithm in our maintenance optimization model.
As future work for the optimization part of this project we are going to examine other optimization algorithms, such as the ant colony optimization algorithm, the artificial bee colony algorithm and natural evolution strategies.
^1 A combination of elements of a web application and a native application.
CHAPTER 6
Maintenance Optimization Model
6.1 Introduction
As we mentioned before, maintenance activity is at the highest level divided into two major areas: corrective maintenance activities (CM) and preventive maintenance activities (PM). Corrective maintenance is, per definition, performed as a response to a system failure, while preventive maintenance is performed when the system is operational, in order to avoid future system failure.
When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. The optimization of these activities is largely affected by their financial implications for a specific corporation: two equivalent systems (mechanical or otherwise) under similar operation may require two quite different maintenance policies for two different corporations. A concise review and analysis of different maintenance optimization models can be found in . In that article the authors describe several models for analytical optimization of PM policies and mention computer simulation as a good tool whenever simplifications of systems, made to keep them analytically tractable, would lead to unrealistic results. In light of this we have focused our efforts towards a simulation approach to maintenance optimization, with the benefit of a capability to optimize more complex systems.
6.2 Simulation Framework Model
In this section we introduce a framework model for simulation of a stochastic system; the reader may think of it in terms of a mechanical system operating in some corporate environment and subject to corrective and preventive maintenance activities. Consider a discretization of time into time-steps Δt
and a description of a system in which all events fill up a whole number of such time-steps. By making Δt sufficiently small, such a model can describe the system with arbitrary precision. We consider a time horizon of T such discrete time units. At each point in time the system is described in all important aspects by a state vector S ∈ R^M; the current state includes the system time (a multiple of Δt) and any variables describing the components of the system.
Furthermore, consider three types of events: random events, which happen stochastically depending on the evolution of the system state; plannable events, which may happen by choice depending on the current system state; and a default event, which happens whenever neither a random nor a plannable event occurs. Let P = {P_i} be the set of plannable events, R = {R_i} be the set of random events and D be the default event. All events are considered as functions that do nothing else than change the current state, P_i, R_i, D : R^M → R^M. Let r_i be functions r_i : R^M → [0, 1] corresponding to each random event R_i such that r_i(S_t) is the probability that the event R_i was triggered before time t. Let p_i be functions (called decidors) p_i : R^M → {True, False} corresponding to each plannable event P_i such that if p_i(S_t) = True then the plannable event P_i is triggered at time t, and if p_i(S_t) = False the event is not triggered at time t.
Figure 6.1: The structure of a decidor: decision methods operating on decision data.
For the rationale of this simulation system we also assume that, if several
plannable and random events compete to run at a specific time, the order in
which they are executed has limited effect on the results of the simulation.
We also assume that the plannable events, when executed, have no significant
increasing effect on the probability of the random events being triggered in
the next time-step.
6.3 The Simulation Algorithm
Consider Figure 6.2, which describes the major structure of our simulation
algorithm.

Figure 6.2: Schematic view of our simulation algorithm.
We begin by setting the current state to the initial state. In each iteration
of the algorithm we first execute all plannable events P_i for which
p_i(S) = True; we then randomize the level r̄_i at which the random events
will trigger in the future, execute the default event D, execute each random
event R_i for which the current level exceeds the trigger level (r_i(S) ≥ r̄_i),
and we iterate this until our termination time T is reached. A more detailed
description can be found in the following algorithm.
1:  S ← S_initial
2:  while t ≤ T do
3:     for all i where p_i(S) = True do
4:        S ← P_i(S)
5:     end for
6:     for all i do
7:        r̄_i ← sample from uniform distribution on [r_i(S), 1]
8:     end for
9:     S ← D(S)
10:    for all i where r_i(S) ≥ r̄_i do
11:       S ← R_i(S)
12:    end for
13: end while
Note that the state S may change for subsequent i in the for-loops at lines
3 and 10; our implementation iterates through i in increasing order. Of
course the values taken by i depend on the number of plannable and random
events respectively.
Normally, since we have randomly triggered events, we would need to run
this simulation a number of times, perhaps 10 to 1000 repetitions, and collect
statistics for the state variables in S as they develop over time. In our
implementation we gather the first two moments of all state variables for each
point in time, that is, their mean value and their mean square value.
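The repetition-and-moments bookkeeping described above can be sketched as follows. This is a minimal sketch, not the thesis implementation: the simulated "system" is a hypothetical placeholder whose single state variable grows by a uniform random increment each step, standing in for the full event machinery.

```java
import java.util.Random;

public class MomentCollector {
    /** Runs R repetitions of a T-step simulation, accumulating the first two
     *  moments of the state at every time step; returns {mean, variance} of
     *  the state at the horizon. */
    public static double[] run(int T, int R, long seed) {
        double[] mean = new double[T];
        double[] meanSq = new double[T];
        Random random = new Random(seed);

        for (int r = 0; r < R; r++) {
            double state = 0.0;               // initial state
            for (int t = 0; t < T; t++) {
                state += random.nextDouble(); // placeholder "default event"
                mean[t] += state;             // accumulate first moment
                meanSq[t] += state * state;   // accumulate second moment
            }
        }
        for (int t = 0; t < T; t++) {
            mean[t] /= R;
            meanSq[t] /= R;
        }
        // Variance at the horizon, recovered from the two collected moments.
        double var = meanSq[T - 1] - mean[T - 1] * mean[T - 1];
        return new double[] { mean[T - 1], var };
    }

    public static void main(String[] args) {
        double[] out = run(100, 1000, 42L);
        System.out.println("mean=" + out[0] + " var=" + out[1]);
    }
}
```

Collecting only the running sums of x and x² keeps memory constant in the number of repetitions while still allowing the mean and variance to be reported afterwards.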
6.4 Example Problem Class
The framework model and simulation algorithm in the previous sections allow
for easy modeling of many real-world systems. In a specific model, one of
the most important decisions to make is the time discretization Δt. Making
it too small will increase the run-time of any simulation, and making it too
large will introduce greater error in the simulation output and in subsequent
decisions based on the output. Another key factor is choosing specific ways
in which to encode the decidors p_i. One of the simplest decidors, which we
have used in this thesis and specifically in our example problem class, is the
linear decidor.
A linear decidor p_i in our framework model is a real vector p_i ∈ R^M such
that the outputted decision value, True or False, is equal to the truth value
of the statement p_i · S > 1. This allows the decision to perform a plannable
event to depend to different degrees on different elements of the state vector
S. In many cases then, when modeling a maintenance system, all aspects
of the system are constructed to describe the real-world system in sufficient
detail, while the decidors are the desired output, to be chosen to give optimal
performance for the maintenance policy.
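As a minimal sketch, the linear decidor test p · S > 1 amounts to a single dot product followed by a threshold comparison (the appendix class LinearTrigger implements the same test); the weight and state vectors here are purely illustrative.

```java
public class LinearDecidor {
    /** Returns True iff p · S > 1, the linear decidor criterion. */
    static boolean decide(double[] p, double[] S) {
        double sum = 0;
        for (int i = 0; i < p.length; i++)
            sum += p[i] * S[i];
        return sum > 1;
    }

    public static void main(String[] args) {
        double[] p = {0.01, 0.2};  // hypothetical weights on two state variables
        System.out.println(decide(p, new double[]{50, 3}));  // 0.5 + 0.6 = 1.1 > 1
        System.out.println(decide(p, new double[]{10, 2}));  // 0.1 + 0.4 = 0.5
    }
}
```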
Since there is only one type of decidor (one decision method) in our example
problem class, and the data for all decidors are real vectors of the same size,
we may describe the decision process in a more compact form. Let M be the
matrix defined by

    M = | p_11  p_12  p_13  ...  p_1M |
        | p_21  p_22  p_23  ...  p_2M |
        | p_31  p_32  p_33  ...  p_3M |
        |  ...   ...   ...  ...   ... |
        | p_N1  p_N2  p_N3  ...  p_NM |,
where N is the number of plannable events in the system. Furthermore,
consider the current system state S to be a column vector. Then the elements
of the vector

    D = M S

will take a value d_i ≥ 1 for each plannable event P_i which is decided to be
performed.
Our simulation algorithm, however, will not in the current version execute all
such decided plannable events, but will in effect only iterate i from 1 to N,
perform the event if d_i ≥ 1, and, in anticipation of the next value of i,
recompute D. A variation of this method could be to calculate D, then execute
the event with the greatest d_i, then recalculate D and again pick the event
with the greatest d_i, until all d_i < 1, in which case the simulation would
continue to execute the default event as usual. This method adds overhead,
since the matrix multiplication may need to be recalculated several times, but
it might add extra decision intelligence by letting the decidors have different
degrees of preference for the execution of different plannable events.
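The greedy variation just described can be illustrated as follows. The matrix M, the state s and the event effects are hypothetical: each "event" simply adds a fixed vector to the state, standing in for the full event functions of the framework.

```java
import java.util.ArrayList;
import java.util.List;

public class GreedyDecider {
    /** Computes the decision vector d = M s. */
    static double[] decide(double[][] M, double[] s) {
        double[] d = new double[M.length];
        for (int i = 0; i < M.length; i++)
            for (int j = 0; j < s.length; j++)
                d[i] += M[i][j] * s[j];
        return d;
    }

    /** Greedy loop: fire the event with the greatest d_i >= 1, update the
     *  state, recompute d, and repeat until all d_i < 1. Returns the order
     *  in which events fired. */
    static List<Integer> run(double[][] M, double[] s, double[][] effect) {
        List<Integer> fired = new ArrayList<>();
        while (true) {
            double[] d = decide(M, s);
            int best = -1;
            for (int i = 0; i < d.length; i++)
                if (d[i] >= 1 && (best < 0 || d[i] > d[best]))
                    best = i;
            if (best < 0)
                break; // all d_i < 1: fall back to the default event
            fired.add(best);
            for (int j = 0; j < s.length; j++)
                s[j] += effect[best][j]; // hypothetical event: add a fixed vector
        }
        return fired;
    }

    public static void main(String[] args) {
        double[][] M = {{1.0, 0.0}, {0.0, 1.0}};       // each decidor watches one state
        double[] s = {2.5, 1.2};                        // both decidors over threshold
        double[][] effect = {{-2.5, 0.0}, {0.0, -1.2}}; // events reset "their" state
        System.out.println(run(M, s, effect));          // prints [0, 1]
    }
}
```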
In this section we provide a method for constructing members of a class
of problems, using linear decidors, such that these problems in many aspects
could be considered to describe real-world maintenance activities on systems
with one or more components. Specifically, the single most important
outputted value of the simulations is the expected profit at the end of the
time horizon T. We also allow the components to have evolving efficiency
measures that affect the profit development over time.

Let N_c be the number of components in our problem system; this value is a
free parameter in the class and can for example be chosen randomly. Each
such component is granted a number of independent state variables s_ij, a
part of the total state vector S.
For each component we allow for any number of randomly triggered events
R_i. Each such event has a probability distribution r_ij which is a Weibull
distribution with a randomly chosen scale (λ) and shape (k). To compute
the probability levels as functions of the state, r(S), we introduce a linear
dependency on the independent state variables associated with the component
(a_i · S). That is, r(S) = r(a_i · S).
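As a concrete illustration of such a trigger probability, the following sketch evaluates the cumulative Weibull distribution F(x) = 1 − exp(−(x/λ)^k) at the linear combination a · S. The λ and k values mirror the first random event of component 1 in the numerical example; the dependency vector and state are hypothetical placeholders.

```java
public class WeibullLevel {
    /** Cumulative Weibull distribution with scale lambda and shape k. */
    static double weibullCdf(double x, double lambda, double k) {
        if (x <= 0) return 0.0;
        return 1.0 - Math.exp(-Math.pow(x / lambda, k));
    }

    /** Linear combination a · S of the component's state variables. */
    static double dot(double[] a, double[] S) {
        double sum = 0;
        for (int i = 0; i < a.length; i++)
            sum += a[i] * S[i];
        return sum;
    }

    public static void main(String[] args) {
        double lambda = 43.385, k = 1.6374; // component 1, event 1
        double[] a = {0.33321};             // dependency vector
        double[] S = {60.0};                // hypothetical component age
        // Probability that this random event has triggered by now.
        System.out.println(weibullCdf(dot(a, S), lambda, k));
    }
}
```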
For each component we also allow for a number of dependent state variables
in a similar fashion, e_ij. These state variables are computed from the current
independent state variables associated with the component by a function
f(S) = f(v_i · S), where f is a randomly chosen Weibull distribution and v_i is a
suitable randomly chosen dependency vector. We interpret these dependent
variables as a measure of efficiency for some function of the component,
which will soon be seen to affect the simulated profit.
What remains in our example class is to define, or actually parameterize, the
actual event functions P_i, R_i, D.

We generate the plannable and the random events P_i, R_i in the same fashion:
S ← MS + w, where M is a matrix of suitable dimensions and w a vector,
both chosen randomly such that the new values of the dependent state
variables associated with the corresponding component are free to change, but
depend only on the old values of these state variables. The only other
state variables allowed to change are the time t and the current profit; the time
must be advanced by an integer amount and the profit must be increased
by a constant term. The default event D is somewhat different. It may add
a constant term s to any dependent state of any component, the current
profit must increase by addition of a term p and may also decrease linearly
by a factor c_e multiplied by any dependent state variable (efficiency measure),
and the time must increase by 1 (Δt).
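The affine update S ← MS + w used for the events can be sketched as follows, with illustrative M and w that are not taken from the thesis tables: this hypothetical event halves a dependent state, advances time by one step, and charges a constant cost against the profit.

```java
public class AffineEvent {
    /** Applies the affine event update: out = M s + w. */
    static double[] apply(double[][] M, double[] w, double[] s) {
        double[] out = new double[s.length];
        for (int i = 0; i < s.length; i++) {
            for (int j = 0; j < s.length; j++)
                out[i] += M[i][j] * s[j];
            out[i] += w[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // Hypothetical state layout: [dependent state, time, profit].
        double[][] M = {
            {0.5, 0, 0},  // dependent state is halved by the event
            {0, 1, 0},    // time carried over ...
            {0, 0, 1}     // ... profit carried over
        };
        double[] w = {0, 1, -5}; // time +1, profit charged a constant cost
        double[] s = {0.8, 10, 100};
        System.out.println(java.util.Arrays.toString(apply(M, w, s)));
        // prints [0.4, 11.0, 95.0]
    }
}
```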
Figure 6.3: Illustration of the model (the simulation, the decidors, the events P_i, R_i, D and the fitness).
6.5 A Numerical Example
We selected the following example system, randomized from our stated problem
class. The number of components is N_c = 2. The horizon time is T = 504.
The first component has two random events (failures) and one independent
state variable. The second component has only one random event and also
one independent state variable. Both components have two efficiency
measures.
Component  Event  λ       k       a        M  w  Δt  p
1          1      43.385  1.6374  0.33321  0  0  4   -7.8147
1          2      66.813  1.5975  0.38519  0  0  4   -8.8816
2          1      66.487  1.1860  0.30687  0  0  2   -9.8577

Table 6.1: Parameters defining random events.
Component  Event  M  w  Δt  p
1          1      0  0  1   -1
1          2      0  0  1   -1
2          1      0  0  1   -1

Table 6.2: Parameters defining plannable events.
Component  Efficiency  λ    k        v
1          1           504  0.55770  0.36798
1          2           504  1.2823   0.70517
2          1           504  0.85805  0.071498
2          2           504  0.51121  0.34842

Table 6.3: Parameters defining efficiency measures.
p  c_e                                          s
1  (-0.0096735, -0.69229, -0.77131, -0.71269)  (0.21124, 0.78309)

Table 6.4: Parameters defining the default event.
Figure 6.4: Results from optimization of linear decidors in the example problem.

In Figure 6.4 the expected values of the relevant state variables, from 2000
iterations of a simulation of the example problem, are shown. The linear
decidors of the system (plannable events) were optimized using a genetic
algorithm with the expected profit at the time horizon (T) as the fitness
function. All six graphs are rescaled to lie between 0 and 1 to allow placement
in the same figure.

The two green curves show the two independent state variables (one for
each component). The red spikes represent a high probability of having
a plannable event at those points in time. Where there is an increased
probability of the plannable events occurring, there is also a visible drop in
the corresponding state variable: because M and w are both 0 for all plannable
events, the events effectively reset the states (component age) to zero.
The fitness can be seen to increase approximately linearly, which is a common
feature of maintenance planning simulations. As can be seen, the red spikes,
denoting a high probability for plannable events, decrease in height at later
times. This is likely a result of the increased number of random events that
have occurred on average by later times, causing a decreased determinability
of the system.
Figure 6.5 shows the convergence of the expected profit at the end of the
horizon T for the genetic algorithm used. We begin with an expected profit
of 377; after 272 generations of the optimization we reach a fairly long
plateau beginning at a profit of 421, and we terminate the algorithm after
512 generations with a profit of 423.
Figure 6.5: Convergence of the expected profit at the end of horizon T for the genetic algorithm used to optimize the linear decidors in the example problem.
We summarize our algorithm as a flowchart below. This overview allows a
better understanding of the process.
Start: initialize the system, then repeat the following until time ≥ T_max,
at which point we stop.

1. For each plannable event: perform the event if the event's decidor returns true.
2. For each random event: generate a random level for the cumulative failure
   distribution at which point the component will fail.
3. Perform the default event.
4. For each random event: perform the event if its cumulative probability ≥ the
   randomly generated cumulative level.
6.6 Conclusions
In this chapter we introduced our maintenance optimization model. We
defined three different types of events, random, plannable and default events,
with different tasks. We also defined the decidor for each plannable event and
described the structure and function of a decidor. In Section 6.3 we illustrated
our simulation algorithm in detail.

In Section 6.4 we explained a specific model, introduced the linear decidor
as a type of decidor, and formulated the decision process in a compact form.
We finished this chapter with a numerical example using a genetic algorithm
and analyzed the result.

The main goal of this chapter was to create a simulation approach to
maintenance optimization with the capability to optimize more complex
systems.
CHAPTER 7. Summary, Conclusions and Future Work
7.1 Summary
As was discussed in the abstract, this work is split into two separate parts, one
dealing with decision making (Chapter 3) and one dealing with a maintenance
optimization model (Chapter 6). The aim has been to create an optimization
model for the maintenance problem. Component health was examined
in Chapter 2. Several criteria were identified for the DSS in Chapter 3 and
a maintenance decision was found. We analyzed different optimization
methods and algorithms in Chapter 4, and the genetic algorithm was chosen
as a suitable optimization algorithm (see Chapter 5). The maintenance plan,
algorithm, simulation and optimization are summarized in Chapter 6, and
in Section 6.5 the results of the numerical example are discussed.
7.2 Conclusions
We have provided some introduction to maintenance planning optimization.
Simulation and suitable optimization algorithms provide freedom in
describing complex systems otherwise intractable by analytical methods. The
method of finding optimal decidors, defined by some parameterization of the
decision logic, allows for prompt establishment of a fair approximation to the
optimal use of resources, although some additional layers are useful, perhaps
necessary, to establish clear decision support for human consideration.
The decidors make their decisions only as a function of the current state and
so provide only indirect decision patterns for planning future activities.
One good way to transition from the decidors, and the statistics collected for
their decisions on a given system, starts from the realization that decisions
closer in time are more readily understandable and predictable. As a system
evolves, its inherent stochastic nature makes, for example, the precise timing
of maintenance activities more and more difficult. In light of this we may
consider optimizing a plan for the near future in which activities are decided
and fixed, and will be performed unless unexpected events occur, in which
case a new good near-future plan must be found. For a one-component system
such a short-time plan may be based on `time to first intervention', where we,
for all possible times to the first plannable event, consider enforcing the plan
and, if unexpected behavior occurs in the simulations, fall back on the
decidors to evaluate a near-optimal `just in time' decision policy for the future.
This provides the decision maker with a single expected value (profit) for
planning the first intervention at different times, obtained by simulation.
7.3 Future Work
There are three main lines of this research which should be pursued. Firstly,
the work on the criteria that affect component health: it is our intention
to investigate thoroughly the various criteria, with a view to measuring
how different criteria affect the different components. To achieve this goal
we need accurate data mining with tools such as RapidMiner. A second
line, which follows from Chapter 2, is to investigate and calculate the
system reliability using the equations that we introduced and proved in
Chapter 2. As a last line, we are going to examine the results using different
optimization algorithms, such as the artificial bee colony algorithm, memetic
algorithms, etc.
CHAPTER 8. Summary of Reflection of Objectives in the Thesis
In this chapter we summarize our views of the objectives achieved in this
thesis.
8.1 Objective 1 - Knowledge and Understanding
In writing this thesis I had to research many journal articles in mechanical
engineering, mathematical modeling, simulation and optimization.

In the process of writing Chapter 4, I used a broad range of optimization books
as source material. Originally I compared many optimization methods and
algorithms in detail, and as the material grew to many pages I decided,
on the recommendation of my examiner and supervisor, to present the main
results of the chapter in a more concise format.

At the research level, I recognized that the initial problem is a multi-objective
optimization problem, and therefore I transferred the problem to a fuzzy
environment. I found 15 criteria which affect component health. I also analyzed
the structure of the component and studied a wide range of articles
in different areas of multi-criteria decision making; I then found TOPSIS to be a
suitable method for the DSS.

In Section 2.5.1, I found the Riccati differential equation to be a suitable
equation to express the system reliability.
8.2 Objective 2 - Methodological Knowledge
In writing this thesis I found two different approaches to decision making.
The first approach is a combination of fuzzy decision making and the TOPSIS
method. As a second approach, I used a genetic algorithm for optimization.

To give the reader a better understanding of the background to the chapters,
I began every chapter with a short introduction and definitions.
8.3 Objective 3 - Critically and Systematically Integrate Knowledge
The sources that I used in the thesis were obtained through searches in the
Mälardalen University computer system, through journal articles that I found
in the university library, and through Google searches.

I divided the thesis into three different phases: pre-study, execution and
implementation, covering maintenance policies and optimization. I identified
the problem and indicators in these phases.

The execution phase comprised designing, building and testing the initial
algorithm in Matlab. The implementation phase was a collaboration between
me and Jonas Österberg; the initial algorithm was implemented in JavaScript.
8.4 Objective 4 - Independently and Creatively
In Section 2.5.1 I used the Riccati differential equation to find Eq. 2.16,
which is a combination of two equations introduced by Yuo-Tern
Tsai and Kuo-Shong.

The numerical example in Section 3.8 was defined by the author. I identified
15 criteria which affect the health of a component. To calculate the relationship
between the criteria and the alternatives, I used my knowledge of solid
mechanics and strength of materials, and I also discussed the results with senior
mechanical engineers in the maintenance department at Scania.
8.5 Objective 5 - Present and Discuss Conclusions and Knowledge
I believe that this thesis is understandable for readers who have a moderate
familiarity with English and moderate knowledge of optimization topics. The
reader needs a good knowledge of DSS to understand the different
methods that we used in this project for decision making (such as TOPSIS
and fuzzy decision making).

I assigned Chapter 1 to definitions and terms and also tried to define all
concepts which are required to understand the topic. In this project, I also
provide some numerical examples to give a better understanding.

I explained the future work in the conclusions section, which is to explore
the use of data mining with RapidMiner to get a more accurate model of
component health.
8.6 Objective 7 - Scientific, Social and Ethical Aspects
This thesis was part of the IRIS project (in Swedish: Integrerat dynamiskt
prognostiserande underhållsstöd), a five-year project in Scania's Research
and Development department in Sweden. The approval process for
this project at Scania involves review and approval at each phase of project
development.

To follow up the results, we had scheduled meetings with the IRIS-project
team every week, and also with my supervisor Dr. Jonas Biteus, who is
involved in the project.

I also presented this project at the ICNPAA 2014 Congress in Norway
(Mathematical Problems in Engineering, Aerospace and Sciences). The
congress article, unpublished at this point, has the title `Solving complex
maintenance planning optimization problems using stochastic simulation and
multi-criteria fuzzy decision making' and was written under the supervision of
Professor Sergei Silvestrov, Jonas Österberg and Dr. Jonas Biteus.
APPENDIX A. Java code - Event Triggers
In this appendix we list the three types of event triggers used in our simula-
tions.
Trigger: The superclass describing a decision to perform an event at a
specific time. The prepare method is called at the beginning of every event
dispatch iteration and allows the implementors of this class to adjust their
behavior depending on a change in system state. The decide method is called
whenever this trigger is asked if its payload event should be executed. The
mutate and mate methods are used by the genetic algorithm for constructing
new triggers from one or two ancestors, leaving the specifics of the mutations
up to the implementing classes.
AlwaysTrigger: Represents the decision to perform the event whenever it is
asked to decide. This is suitable for "default" events, representing continuous
normal system operation.
StochasticTrigger: This trigger offers stochastic behavior due to a change
in the value method from the prepare phase to the decide phase.
LinearTrigger: This trigger returns a True decision if the linear combina-
tion of the state variables exceeds 1. The coefficients for the linear combina-
tion are stored in the val array, and these values will change on a mutate or
mate method call.
package com.vuebe.vhen;

import java.util.Random;

public abstract class Trigger {
    public static Random random = new Random();
    public Event event;

    public abstract void prepare(State state);

    public abstract boolean decide(State state);

    public Trigger mutate(double p) {
        return this;
    }

    public Trigger mate(Trigger b, double interpol) {
        return this;
    }

    public Trigger random() {
        return this;
    }
}
package com.vuebe.vhen;

public class AlwaysTrigger extends Trigger {

    public AlwaysTrigger(Event event) {
        this.event = event;
    }

    @Override
    public void prepare(State state) {
    }

    @Override
    public boolean decide(State state) {
        return true;
    }
}
package com.vuebe.vhen;

public abstract class StochasticTrigger extends Trigger {
    private double value;
    private double triggervalue;

    public StochasticTrigger(Event event) {
        this.event = event;
    }

    public abstract double value(State state);

    @Override
    public void prepare(State state) {
        value = value(state);
        if (value < 0)
            value = 0;
        if (value > 1)
            value = 1;
        triggervalue = value + random.nextDouble() * (1 - value);
    }

    @Override
    public boolean decide(State state) {
        value = value(state);
        if (value < 0)
            value = 0;
        if (value > 1)
            value = 1;
        return value >= triggervalue;
    }
}
package com.vuebe.vhen;

import com.vuebe.util.Util;

public class LinearTrigger extends Trigger {
    public double[] val;
    // mask[i] = 1 allows coefficient i to change under mutate/random; 0 freezes it.
    public double[] mask;

    public LinearTrigger(Event event, double[] val, double[] mask) {
        this.event = event;
        this.val = val;
        this.mask = mask;
    }

    public LinearTrigger(Event event, double[] val) {
        this.event = event;
        this.val = val;
        this.mask = new double[val.length];
        setMask();
    }

    public LinearTrigger(Event event, State S) {
        this.event = event;
        this.val = new double[S.val.length];
        this.mask = new double[val.length];
        setMask();
    }

    public void setMask() {
        for (int i = 0; i < mask.length; i++)
            mask[i] = 1;
    }

    public void setMask(int i) {
        mask[i] = 1;
    }

    public void clearMask() {
        for (int i = 0; i < mask.length; i++)
            mask[i] = 0;
    }

    public void clearMask(int i) {
        mask[i] = 0;
    }

    @Override
    public LinearTrigger mutate(double p) {
        double[] mal = new double[val.length];
        for (int i = 0; i < mal.length; i++) {
            if (random.nextDouble() < p) {
                double v = val[i];
                mal[i] = v + (random.nextDouble() * 2 - 1) * mask[i];
                if (mal[i] < 0)
                    mal[i] = 0;
            } else {
                mal[i] = val[i];
            }
        }
        return new LinearTrigger(event, mal, mask);
    }

    @Override
    public Trigger mate(Trigger b, double interpol) {
        if (b instanceof LinearTrigger) {
            LinearTrigger B = (LinearTrigger) b;
            double[] va = val;
            double[] vb = B.val;
            double[] vm = new double[va.length];
            for (int i = 0; i < vm.length; i++) {
                double v1 = va[i];
                double v2 = vb[i];
                vm[i] = v1 + (v2 - v1) * interpol;
            }
            return new LinearTrigger(event, vm, mask);
        } else
            throw new RuntimeException("Invalid Trigger type combination.");
    }

    public Trigger random() {
        double[] vm = new double[val.length];
        for (int i = 0; i < vm.length; i++) {
            vm[i] = mask[i] * random.nextDouble() * 0.001;
        }
        return new LinearTrigger(event, vm, mask);
    }

    @Override
    public void prepare(State state) {
    }

    @Override
    public boolean decide(State state) {
        double sum = 0;
        for (int i = 0; i < val.length; i++) {
            sum += val[i] * state.val[i];
        }
        return sum >= 1;
    }

    public String toString() {
        return "val " + Util.toString(val) + " mask " + Util.toString(mask);
    }
}
APPENDIX B. Java code - Genetic Algorithm
In this appendix we list the Java code for the genetic algorithm used in our
simulations.

Individual: The genetic algorithm implementation considers a population
of individuals, but the specifics of these individuals are subject to
customization by implementing this interface. Specifically, the evaluate method
computes the fitness of this individual and the getValue method returns
the last evaluated fitness. The fitness function may be stochastic, and
so the evaluate method may have to be called more than once.

GA: This is the main genetic algorithm implementation, providing the possibility
to optimize the fitness function over an individual and its mutated offspring.
The initiatePopulation method provides the algorithm with a seed
individual and from it constructs a population. This population is then evolved,
ideally to a better fitness, by the evolve method. The evolve method may
be run any number of times, and the currently best individual can then be
retrieved by the getBestIndividual method.
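To illustrate how this API is intended to be driven (seed, then initiatePopulation, repeated evolve, and finally getBestIndividual), the following self-contained toy mirrors that call pattern with a much simplified, independent implementation: a one-dimensional individual with fitness -(x - 3)^2, maximized at x = 3. None of the thesis classes are used; every name here is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class ToyGA {
    static final Random RND = new Random(7); // fixed seed for reproducibility

    static class Ind {
        double x, fitness;
        Ind(double x) { this.x = x; }
        void evaluate() { fitness = -(x - 3) * (x - 3); } // toy fitness, max at x = 3
        Ind mutate(double p) {
            return new Ind(RND.nextDouble() < p ? x + RND.nextGaussian() * 0.3 : x);
        }
    }

    List<Ind> population = new ArrayList<>();

    void initiatePopulation(Ind seed, int n) {
        for (int i = 0; i < n; i++) {
            Ind ind = new Ind(seed.x + RND.nextGaussian()); // randomized from seed
            ind.evaluate();
            population.add(ind);
        }
    }

    void evolve() {
        // Keep the better half, refill the rest with mutated copies of it.
        population.sort((a, b) -> Double.compare(b.fitness, a.fitness));
        int half = population.size() / 2;
        for (int i = half; i < population.size(); i++) {
            Ind child = population.get(i - half).mutate(0.9);
            child.evaluate();
            population.set(i, child);
        }
    }

    Ind getBestIndividual() {
        Ind best = population.get(0);
        for (Ind ind : population)
            if (ind.fitness > best.fitness) best = ind;
        return best;
    }

    public static double run() {
        ToyGA ga = new ToyGA();
        ga.initiatePopulation(new Ind(0.0), 20);
        for (int g = 0; g < 100; g++)
            ga.evolve();
        return ga.getBestIndividual().x;
    }

    public static void main(String[] args) {
        System.out.println(run()); // should approach 3
    }
}
```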
package com.vuebe.ga;

public interface Individual extends Comparable<Individual> {

    /** Compute the value of this individual. */
    public double evaluate();

    /** Return the computed value of this individual. */
    public double getValue();

    public Individual mutate(double p);

    public Individual[] mate(Individual mate);

    public Individual random();
}
package com.vuebe.ga;

import java.util.Collections;
import java.util.Random;
import java.util.Vector;

import com.vuebe.vhen.*;

public class GA {
    public static final int THREADCOUNT = 8;

    public static final int TOURNAMENT = 0;
    public static final int ROULETTE_WHEEL = 1;

    private int Ne = 2;                        // Elite individuals
    private int Ns = 50;                       // Selection individuals
    private int N = Ne + Ns;                   // Population size N = Ne + Ns
    private double Pmut = 0.2;                 // Mutate probability
    private int selectionmethod = TOURNAMENT;  // Selection method

    private int Nt = 6;                        // Tournament size
    private double Pt = 0.5;                   // Tournament probability

    private Random random = new Random();

    private Vector<Individual> population;
    private int generation;
    private int evaluations;

    public Individual getBestIndividual() {
        Individual ind = population.get(0);
        double best = ind.getValue();
        for (int i = 1; i < population.size(); i++) {
            Individual ind2 = population.get(i);
            double value = ind2.getValue();
            if (value > best) {
                best = value;
                ind = ind2;
            }
        }
        return ind;
    }

    public void setSelectionSize(int ns) {
        if (ns < 0 || (ns & 1) != 0)
            throw new RuntimeException();
        this.Ns = ns;
        this.N = Ns + Ne;
    }

    public void setEliteSize(int ne) {
        if (ne < 0)
            throw new RuntimeException();
        this.Ne = ne;
        this.N = Ns + Ne;
    }

    public int getGeneration() {
        return generation;
    }

    public int getEvaluations() {
        return evaluations;
    }

    public void initiatePopulation(Individual seed) {
        population = new Vector<Individual>();
        evaluations = 0;
        for (int i = 0; i < N; i++) {
            Individual individual = seed.random();
            individual.evaluate();
            evaluations++;
            population.add(individual);
        }
    }

    public void sort() {
        Collections.sort(population);
    }

    public void evolve() {
        sort();

        Vector<Individual> nextpopulation = new Vector<Individual>();

        // Selection
        if (selectionmethod == TOURNAMENT) {
            for (int tourn = 0; tourn < Ns / 2; tourn++) {
                Vector<Individual> tour = new Vector<Individual>();
                for (int i = 0; i < Nt; i++) {
                    tour.add(population.get(random.nextInt(population.size())));
                }
                Collections.sort(tour);

                Individual first = null;
                Individual second = null;
                int firstindex = -1;
                combine: while (true) {
                    double P = Pt;
                    for (int i = Nt - 1; i >= 0; i--) {
                        if (i != firstindex && random.nextDouble() < P) {
                            if (first == null) {
                                first = tour.get(i);
                                firstindex = i;
                            } else {
                                second = tour.get(i);
                                Individual[] children = first.mate(second);

                                children[0] = children[0].mutate(Pmut);
                                children[1] = children[1].mutate(Pmut);

                                nextpopulation.add(children[0]);
                                nextpopulation.add(children[1]);

                                children[0].evaluate();
                                children[1].evaluate();

                                evaluations += 2;

                                break combine;
                            }
                        }
                    }
                    P = P * (1 - Pt);
                }
            }
        } else
            throw new RuntimeException("Unimplemented selection method.");
        // Elite
        for (int i = 0; i < Ne; i++) {
            Individual ind = population.get(population.size() - 1 - i);
            ind = ((com.vuebe.vhen.System) ind).clone();
            ind.evaluate();
            nextpopulation.add(ind);
        }

        generation++;
        population = nextpopulation;
    }
}
75

APPENDIX C
Java code - Simulation Classes
In this appendix we list the Java code for the simulation classes used in our
simulations.
State: Represents a system state consisting of a set of double values (real
numbers) in the val field. The state variables are named by the corresponding
entries of the name array. The dep array makes it possible to have some state
variables computed dynamically as functions of the other state variables.
Event: The superclass of all events that may occur in the simulation; an
event's only effect is to change the values of the state variables in its run
method.
StateFunction: Used in the State class to provide dynamically computed
state variables.
System: The main simulation class, realizing the simulation algorithm. The
system is defined by the initialstate field and the set of triggers (and
the events they trigger) in the triggers field. The toMatlab method creates
a string in Matlab syntax constructing the experimental data gathered from
a number of simulations performed on this system by the simulate method.
The class also provides mutation and mating methods, since the class can be
used as an individual in the genetic optimization algorithm. The main
simulation algorithm is contained in the simulateOne method, which performs
a simulation of the system over the entire time horizon. This method is
written somewhat more generally than the algorithm described previously in
the report, for experimentation purposes and for ease of adaptation to more
complex scenarios. Given a correctly ordered list of event triggers, the
algorithm works equivalently except for a short time-step time difference.
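The interplay between these classes can be illustrated by a minimal, self-contained sketch; the interfaces, variable names and numbers below are simplified stand-ins for illustration, not the thesis classes themselves:

```java
import java.util.Hashtable;

// Simplified stand-ins for the State/StateFunction/Event pattern: a state
// vector with named variables, one dependent variable, and one event.
public class StateSketch {
    interface StateFunction { double value(double[] val); }
    interface Event { void run(double[] val); }

    // One simulation step: run an event, then recompute dependent variables,
    // mirroring the event.run(S); S.computeDependent(); sequence in System.
    static double[] step(double[] val) {
        StateFunction risk = v -> v[1] * v[1];          // "risk" depends on "wear"
        Event tick = v -> { v[0] += 1; v[1] += 0.5; };  // advance "time", add "wear"
        tick.run(val);
        val[2] = risk.value(val);                       // computeDependent() analogue
        return val;
    }

    public static void main(String[] args) {
        // Variable layout by index: 0 = time, 1 = wear, 2 = risk (dependent)
        Hashtable<String, Integer> ind = new Hashtable<>();
        ind.put("time", 0); ind.put("wear", 1); ind.put("risk", 2);
        double[] val = step(new double[3]);
        System.out.println("time=" + val[ind.get("time")]
                + " wear=" + val[ind.get("wear")]
                + " risk=" + val[ind.get("risk")]);
    }
}
```

The key design point carried over from the thesis classes is that dependent variables are never written by events directly; they are always recomputed after each event fires.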
package com.vuebe.vhen;

import java.util.Hashtable;

public class State {
  public Hashtable<String, Integer> ind = new Hashtable<String, Integer>();
  public String[] name;
  public double[] val;
  public StateFunction[] dep;

  public State(State s) {
    ind = (Hashtable<String, Integer>) s.ind.clone();
    val = s.val.clone();
    dep = s.dep.clone();
    name = s.name;
  }

  public State(String... vars) {
    for (int i = 0; i < vars.length; i++) {
      if (ind.put(vars[i], i) != null) {
        throw new RuntimeException("Duplicate variable");
      }
    }
    val = new double[vars.length];
    this.name = vars.clone();
  }

  public State(String[] vars, StateFunction[] dep) {
    this(vars);
    this.dep = dep.clone();
    if (dep.length != vars.length)
      throw new RuntimeException("Invalid dependency function count.");
  }

  public String toString() {
    String s = " ";
    for (int i = 0; i < name.length; i++)
      s += name[i] + " = " + val[i] + " ";
    return s;
  }

  public void computeDependent() {
    if (dep == null)
      return;
    for (int i = 0; i < dep.length; i++) {
      if (dep[i] != null)
        val[i] = dep[i].value(this);
    }
  }

  public int index(String var) {
    return ind.get(var);
  }
}


package com.vuebe.vhen;

public abstract class Event {
  public abstract void run(State state);
}


package com.vuebe.vhen;

public abstract class ValueFunction {
  public abstract double value(System system);
}


package com.vuebe.vhen;

public abstract class StateFunction {
  public abstract double value(State state);
}


package com.vuebe.vhen;

import java.util.Random;
import java.util.Vector;

import com.vuebe.ga.Individual;
import com.vuebe.util.Util;

public class System implements Individual {
  public static Random random = new Random();

  private State initialstate;
  private int horizon;
  private ValueFunction valuefunction;
  public Vector<Trigger> triggers = new Vector<Trigger>();
  public Vector<String> triggername = new Vector<String>();

  // derived or simulated
  int itime;

  public int N;
  public double[][] M; // State Mean
  public double[][] V; // State Variance
  public double[][] C; // Trigger counts

  public double value;
  private int simulationcount;

  public String toMatlab() {
    StringBuilder sb = new StringBuilder();

    sb.append("iterations = " + N);
    sb.append(";\n");

    sb.append("statevars = " + Util.toMatlab(initialstate.name));
    sb.append(";\n");

    for (int i = 0; i < M.length; i++) {
      sb.append("M" + i + " = "
          + Util.toMatlab(Util.multiplyNew(M[i], 1d / N)));
      sb.append(";\n");

      sb.append("V" + i + " = "
          + Util.toMatlab(Util.multiplyNew(V[i], 1d / N)));
      sb.append(";\n");
    }

    sb.append("M={");
    for (int i = 0; i < M.length; i++) {
      sb.append("M" + i);
      if (i != M.length - 1)
        sb.append(",");
    }
    sb.append("}\n");

    sb.append("V={");
    for (int i = 0; i < M.length; i++) {
      sb.append("V" + i);
      if (i != M.length - 1)
        sb.append(",");
    }
    sb.append("}\n");

    for (int i = 0; i < C.length; i++) {
      sb.append("C" + i + " = "
          + Util.toMatlab(Util.multiplyNew(C[i], 1d / N)));
      sb.append(";\n");
    }

    sb.append("C={");
    for (int i = 0; i < C.length; i++) {
      sb.append("C" + i);
      if (i != C.length - 1)
        sb.append(",");
    }
    sb.append("}\n");

    sb.append("triggername={");
    for (int i = 0; i < C.length; i++) {
      sb.append("'" + triggername.get(i) + "'");
      if (i != C.length - 1)
        sb.append(",");
    }
    sb.append("}\n");

    return sb.toString();
  }

  public State getInitialState() {
    return initialstate;
  }

  public Trigger getTrigger(int index) {
    return triggers.get(index);
  }

  public void setInitialState(State S) {
    initialstate = S;
    itime = initialstate.index("time");
  }

  public void setValueFunction(ValueFunction vf) {
    this.valuefunction = vf;
  }

  public void setHorizon(int horizon) {
    this.horizon = horizon;
  }

  public void setSimulationCount(int simulationcount) {
    this.simulationcount = simulationcount;
  }

  public int getHorizon() {
    return horizon;
  }

  public int addTrigger(Trigger trigger, String name) {
    triggers.add(trigger);
    triggername.add(name);
    return triggers.size() - 1;
  }

  public void dump() {
    Util.dump("Initial: " + Util.toString(initialstate.val));
    Util.dump("Horizon: " + horizon);
    for (int i = 0; i < triggers.size(); i++) {
      Util.dump("Trigger " + i + ": " + triggers.get(i).toString());
    }
  }

  private void simulateOne() {
    State S = new State(initialstate);
    S.computeDependent();
    Random rand = new Random();
    S.val[itime] = 0;
    while (S.val[itime] < horizon) {
      // prepare triggers
      for (int i = 0; i < triggers.size(); i++)
        triggers.get(i).prepare(S);
      // decide triggers
      for (int i = 0; i < triggers.size(); i++) {
        if (triggers.get(i).decide(S)) {
          int t0 = (int) Math.round(S.val[itime]);
          if (t0 < horizon)
            C[i][t0]++;

          triggers.get(i).event.run(S);
          S.computeDependent();

          int t1 = (int) Math.round(S.val[itime]);
          for (int t = t0; t < t1 && t < horizon; t++) {
            for (int p = 0; p < S.val.length; p++) {
              double v = S.val[p];
              M[p][t] += v;
              V[p][t] += v * v;
            }
          }
        }
      }
    }
  }

  public void simulate() {
    // Util.dump("Sim");
    // dump();
    M = new double[initialstate.val.length][];
    V = new double[initialstate.val.length][];
    C = new double[triggers.size()][];
    for (int i = 0; i < M.length; i++) {
      M[i] = new double[horizon];
    }
    for (int i = 0; i < V.length; i++) {
      V[i] = new double[horizon];
    }
    for (int i = 0; i < C.length; i++) {
      C[i] = new double[horizon];
    }

    for (N = 0; N < simulationcount; N++) {
      simulateOne();
    }

    if (valuefunction != null) {
      value = valuefunction.value(this);
    }
  }

  private System cloneNoTriggers() {
    System s = new System();
    s.setInitialState(initialstate);
    s.setHorizon(horizon);
    s.setValueFunction(valuefunction);
    s.itime = itime;
    s.simulationcount = simulationcount;
    s.triggername = triggername;
    return s;
  }

  public System cloneMutate(double p) {
    System s = cloneNoTriggers();
    for (int i = 0; i < triggers.size(); i++) {
      s.triggers.add(triggers.get(i).mutate(p));
    }
    return s;
  }

  public System clone() {
    System s = cloneNoTriggers();
    for (int i = 0; i < triggers.size(); i++) {
      s.triggers.add(triggers.get(i));
    }
    return s;
  }

  public System[] cloneMate(System mate) {
    double interpol = random.nextDouble() * 1.4 - 0.2;
    System s1 = cloneNoTriggers();
    System s2 = cloneNoTriggers();
    for (int i = 0; i < triggers.size(); i++) {
      s1.triggers.add(triggers.get(i)
          .mate(mate.triggers.get(i), interpol));
      s2.triggers.add(triggers.get(i).mate(mate.triggers.get(i),
          0.5 - interpol));
    }
    return new System[] { s1, s2 };
  }

  public System cloneRandom() {
    System s = cloneNoTriggers();
    for (int i = 0; i < triggers.size(); i++) {
      s.triggers.add(triggers.get(i).random());
    }
    return s;
  }

  @Override
  public int compareTo(Individual ind) {
    return (int) Math.signum(getValue() - ind.getValue());
  }

  @Override
  public double evaluate() {
    simulate();
    return value;
  }

  @Override
  public double getValue() {
    return value;
  }

  @Override
  public Individual mutate(double p) {
    return cloneMutate(p);
  }

  @Override
  public Individual[] mate(Individual mate) {
    return cloneMate((System) mate);
  }

  @Override
  public Individual random() {
    return cloneRandom();
  }
}
APPENDIX D
Java code - Example Problem Class
In this appendix we list the Java code for the construction and analysis of
instances of the example problem class of Section 6.4.
SystemFactory: The class that constructs systems dynamically and randomly.
Several parameters of the constructed system can be specified, such as the
number of "components" NC and the number of state variables per component,
sv0 to sv1. After the desired parameters have been specified, the system can
be created by a call to the createSystem method.
TestFactory: The main program entry point, through the main method and
subsequently the run method. A SystemFactory object is first constructed,
some parameters are specified for the desired system, and the system is then
created by a createSystem method call. The created system is then fed as the
seed Individual to a genetic algorithm by the ga.initiatePopulation(sys);
statement. The genetic algorithm is then run 10000 times, and the best
individual after the optimization is saved to file in Matlab format for
analysis and plotting in Matlab.
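The driver pattern described above — seed a population from a factory-built individual, evolve repeatedly, keep the best — can be sketched in a self-contained toy form; the one-dimensional fitness function and all parameter values below are illustrative stand-ins, not the thesis classes:

```java
import java.util.Random;

public class DriverSketch {
    static Random random = new Random(0);

    // Toy fitness landscape: one real-valued "policy" x, best at x = 3.
    static double fitness(double x) { return -(x - 3) * (x - 3); }

    // Seed a population, then repeat: binary tournament, replace the loser
    // with a mutated copy of the winner, and finally return the best found.
    static double run(int generations, int popsize) {
        double[] pop = new double[popsize];
        for (int i = 0; i < popsize; i++)
            pop[i] = random.nextDouble() * 10;          // random seeds in [0, 10)
        for (int g = 0; g < generations; g++) {
            int a = random.nextInt(popsize), b = random.nextInt(popsize);
            int winner = fitness(pop[a]) >= fitness(pop[b]) ? a : b;
            int loser = (winner == a) ? b : a;
            pop[loser] = pop[winner] + random.nextGaussian() * 0.1; // mutate
        }
        double best = pop[0];
        for (double x : pop)
            if (fitness(x) > fitness(best)) best = x;
        return best;
    }

    public static void main(String[] args) {
        System.out.println("best x = " + run(2000, 20));
    }
}
```

In the thesis setup the role of the scalar x is played by a whole System individual whose evaluation requires running the stochastic simulation, which is why the number of GA iterations dominates the total running time.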
package com.vuebe.vhen;

import java.util.Random;
import java.util.Vector;

import com.vuebe.math.WeibullDistribution;
import com.vuebe.util.Util;

public class SystemFactory {
  public Random random = new Random(0);

  // T = T1 * T2 * T3, hours days weeks
  public int T;
  public int T1 = 24;
  public int T2 = 7;
  public int T3 = 3;

  // Number of components
  public int NC;

  // number of state variables per component
  public int sv0;
  public int sv1;

  // Failure Age count
  public int FA0;
  public int FA1;
  // weibull probability distribution parameters
  public double FAscalemin;
  public double FAscalemax;
  public double FAshapemin;
  public double FAshapemax;
  // description of age dependency vectors
  public double FAdependencymax;

  // Efficiency age count
  public int EA0;
  public int EA1;
  // weibull probability distribution parameters
  public double EAscalemin;
  public double EAscalemax;
  public double EAshapemin;
  public double EAshapemax;
  // description of age dependency vectors
  public double EAdependencymax;

  //
  public double NEprofitmin;
  public double NEprofitmax;
  public double NEprofitefficiencypenaltymax;
  public double NESVaddmax;

  //
  public boolean useFE;
  public double FEmul0;
  public double FEmul1;
  public double FEadd0;
  public double FEadd1;
  public int FEtime0;
  public int FEtime1;
  public double FEfitness0;
  public double FEfitness1;
  //

  public boolean useDE;
  public double DEmul0;
  public double DEmul1;
  public double DEadd0;
  public double DEadd1;
  public int DEtime0;
  public int DEtime1;
  public double DEfitness0;
  public double DEfitness1;

  // creations
  public Component[] c;
  public int totalSVindex;
  public int totalSVcount;
  public int totalFAindex;
  public int totalFAcount;
  public int totalEAindex;
  public int totalEAcount;
  public int itime;
  public int ifitness;
  public State initialstate;
  public double NEfitness;
  public double[] NEefficiencyfitness;
  public double[] NESVadd;

  public void dump() {
    Util.dump(initialstate.toString());
    Util.dump("T = " + T);
    Util.dump("NC = " + NC);
    for (int i = 0; i < c.length; i++) {
      Util.dump("COMPONENT " + i);
      c[i].dump();
    }
    // Normal Event
    Util.dump("------");
    Util.dump("NEfitness = " + NEfitness);
    Util.dump("NEefficiencyfitness = " + Util.toString(NEefficiencyfitness));
    Util.dump("NESVadd = " + Util.toString(NESVadd));
  }

  public System createSystem() {
    System sys = new System();
    T = T1 * T2 * T3;

    sys.setHorizon(T);
    c = new Component[NC];
    for (int i = 0; i < NC; i++) {
      c[i] = new Component();
    }

    Vector<String> statevars = new Vector<String>();
    Vector<StateFunction> statefunc = new Vector<StateFunction>();

    totalSVindex = statevars.size();
    totalSVcount = 0;
    for (int i = 0; i < NC; i++) {
      Component com = c[i];
      com.svindex = statevars.size();
      for (int j = 0; j < com.svcount; j++) {
        statevars.add("s" + i + "_" + j);
        statefunc.add(null);
      }
      totalSVcount += com.svcount;
    }

    totalFAcount = 0;
    totalFAindex = statevars.size();
    for (int i = 0; i < NC; i++) {
      Component com = c[i];
      com.FAindex = statevars.size();
      for (int j = 0; j < com.FAcount; j++) {
        statevars.add("f" + i + "_" + j);
        statefunc.add(com.createFAFunc(j));
      }
      totalFAcount += com.FAcount;
    }

    totalEAcount = 0;
    totalEAindex = statevars.size();
    for (int i = 0; i < NC; i++) {
      Component com = c[i];
      com.EAindex = statevars.size();
      for (int j = 0; j < com.EAcount; j++) {
        statevars.add("e" + i + "_" + j);
        statefunc.add(com.createEAFunc(j));
      }
      totalEAcount += com.EAcount;
    }

    statevars.add("time");
    statefunc.add(null);

    statevars.add("fitness");
    statefunc.add(null);

    initialstate = new State(statevars.toArray(new String[0]),
        statefunc.toArray(new StateFunction[0]));
    itime = initialstate.index("time");
    ifitness = initialstate.index("fitness");
    sys.setInitialState(initialstate);

    // Normal Event
    sys.addTrigger(createNE(), "n");

    // Failure Events
    if (useFE) {
      for (int i = 0; i < NC; i++) {
        Component com = c[i];
        com.FEindex = sys.triggers.size();
        for (int j = 0; j < com.FAcount; j++) {
          sys.addTrigger(com.FEtrigger[j], "f" + i + "_" + j);
        }
      }
    }

    // Decidable Events
    if (useDE) {
      for (int i = 0; i < NC; i++) {
        Component com = c[i];
        com.DEindex = sys.triggers.size();
        for (int j = 0; j < com.FAcount; j++) {
          sys.addTrigger(com.createDETrigger(j), "d" + i + "_" + j);
        }
      }
    }

    // Value Function
    sys.setValueFunction(new ValueFunction() {
      @Override
      public double value(System system) {
        double[] p = system.M[initialstate.index("fitness")];
        return p[p.length - 1] / system.N;
      }
    });

    return sys;
  }

  public Trigger createNE() {
    NEfitness = inDouble(NEprofitmin, NEprofitmax);
    NEefficiencyfitness = new double[totalEAcount];
    NESVadd = new double[totalSVcount];

    for (int i = 0; i < totalEAcount; i++) {
      NEefficiencyfitness[i] = -inDouble(0, NEprofitefficiencypenaltymax);
    }
    for (int i = 0; i < totalSVcount; i++) {
      NESVadd[i] = inDouble(0, NESVaddmax);
      // Util.dump("totalSVcount " + totalSVcount + " : " + NESVadd[i]);
    }

    return new AlwaysTrigger(new Event() {
      @Override
      public void run(State state) {
        state.val[itime]++;
        state.val[ifitness] += NEfitness;
        for (int i = 0; i < totalEAcount; i++) {
          state.val[ifitness] += NEefficiencyfitness[i]
              * state.val[totalEAindex + i];
        }
        for (int i = 0; i < totalSVcount; i++) {
          double old = state.val[totalSVindex + i];
          state.val[totalSVindex + i] += NESVadd[i];
        }
      }
    });
  }

  public int inInt(int a, int b) {
    return a + random.nextInt(b - a + 1);
  }

  public double inDouble(double a, double b) {
    return a + random.nextDouble() * (b - a);
  }

  public class Component {
    public int svindex; // index in state
    public int svcount; // state-variables count

    public int FAindex;
    public int FAcount; // component age count
    public double[] FAscale;
    public double[] FAshape;
    public WeibullDistribution[] FAfunc;
    public double[][] FAdependency;

    public int FEindex;
    public double[][] FEmul;
    public double[][] FEadd;
    public int[] FEtime;
    public double[] FEfitness;
    public Trigger[] FEtrigger;

    public int DEindex;
    public double[][] DEmul;
    public double[][] DEadd;
    public int[] DEtime;
    public double[] DEfitness;
    public LinearTrigger[] DEtrigger;

    public int EAindex;
    public int EAcount; // efficiency age count
    public double[] EAscale;
    public double[] EAshape;
    public WeibullDistribution[] EAfunc;
    public double[][] EAdependency;

    public double NEfitness;
    public double[] NEefficiencyfitness;

    public void dump() {
      Util.dump("FAscale = " + Util.toString(FAscale));
      Util.dump("FAshape = " + Util.toString(FAshape));
      for (int i = 0; i < FAdependency.length; i++) {
        Util.dump("FAdependency " + i + " = "
            + Util.toString(FAdependency[i]));
      }
      for (int i = 0; i < FEmul.length; i++) {
        Util.dump("FEmul " + i + " = " + Util.toString(FEmul[i]));
      }
      for (int i = 0; i < FEadd.length; i++) {
        Util.dump("FEadd " + i + " = " + Util.toString(FEadd[i]));
      }
      Util.dump("FEtime = " + Util.toString(FEtime));
      Util.dump("FEfitness = " + Util.toString(FEfitness));
      Util.dump("------");
      for (int i = 0; i < DEmul.length; i++) {
        Util.dump("DEmul " + i + " = " + Util.toString(DEmul[i]));
      }
      for (int i = 0; i < DEadd.length; i++) {
        Util.dump("DEadd " + i + " = " + Util.toString(DEadd[i]));
      }
      Util.dump("DEtime = " + Util.toString(DEtime));
      Util.dump("DEfitness = " + Util.toString(DEfitness));
      Util.dump("------");
      Util.dump("EAscale = " + Util.toString(EAscale));
      Util.dump("EAshape = " + Util.toString(EAshape));
      for (int i = 0; i < EAdependency.length; i++) {
        Util.dump("EAdependency " + i + " = "
            + Util.toString(EAdependency[i]));
      }
    }

    public Component() {
      svcount = inInt(sv0, sv1);

      // Failure age
      FAcount = inInt(FA0, FA1);
      FAscale = new double[FAcount];
      FAshape = new double[FAcount];
      FAfunc = new WeibullDistribution[FAcount];
      for (int i = 0; i < FAcount; i++) {
        FAscale[i] = inDouble(FAscalemin * T, FAscalemax * T);
        FAshape[i] = inDouble(FAshapemin, FAshapemax);
        FAfunc[i] = new WeibullDistribution(FAscale[i], FAshape[i]);
      }
      FAdependency = new double[FAcount][];
      for (int i = 0; i < FAcount; i++) {
        FAdependency[i] = new double[svcount];
        for (int j = 0; j < svcount; j++)
          FAdependency[i][j] = inDouble(0, FAdependencymax);
      }

      // Efficiency age
      EAcount = inInt(EA0, EA1);
      EAscale = new double[EAcount];
      EAshape = new double[EAcount];
      EAfunc = new WeibullDistribution[EAcount];
      for (int i = 0; i < EAcount; i++) {
        EAscale[i] = inDouble(EAscalemin * T, EAscalemax * T);
        EAshape[i] = inDouble(EAshapemin, EAshapemax);
        EAfunc[i] = new WeibullDistribution(EAscale[i], EAshape[i]);
      }
      EAdependency = new double[EAcount][];
      for (int i = 0; i < EAcount; i++) {
        EAdependency[i] = new double[svcount];
        for (int j = 0; j < svcount; j++)
          EAdependency[i][j] = inDouble(0, EAdependencymax);
      }

      // Failure Event
      FEmul = new double[FAcount][];
      FEadd = new double[FAcount][];
      FEtime = new int[FAcount];
      FEfitness = new double[FAcount];
      FEtrigger = new Trigger[FAcount];
      for (int i = 0; i < FAcount; i++) {
        final int index = i;
        FEmul[i] = new double[svcount];
        FEadd[i] = new double[svcount];
        for (int j = 0; j < svcount; j++) {
          FEmul[i][j] = inDouble(FEmul0, FEmul1);
          FEadd[i][j] = inDouble(FEadd0, FEadd1);
        }
        FEfitness[i] = inDouble(FEfitness0, FEfitness1);
        FEtime[i] = inInt(FEtime0, FEtime1);
        FEtrigger[i] = new StochasticTrigger(new Event() {
          @Override
          public void run(State state) {
            for (int i = 0; i < svcount; i++) {
              state.val[svindex + i] = state.val[svindex + i]
                  * FEmul[index][i] + FEadd[index][i];
            }
            state.val[itime] += FEtime[index];
            state.val[ifitness] += FEfitness[index];
          }
        }) {

          @Override
          public double value(State state) {
            double sum = 0;
            for (int j = 0; j < svcount; j++)
              sum += FAdependency[index][j]
                  * state.val[svindex + j];
            // Util.dump("sum " + sum);
            return FAfunc[index].value(sum);
          }
        };
      }

      // Decidable Event
      DEmul = new double[FAcount][];
      DEadd = new double[FAcount][];
      DEtime = new int[FAcount];
      DEfitness = new double[FAcount];
      DEtrigger = new LinearTrigger[FAcount];
      for (int i = 0; i < FAcount; i++) {
        final int index = i;
        DEmul[i] = new double[svcount];
        DEadd[i] = new double[svcount];
        for (int j = 0; j < svcount; j++) {
          DEmul[i][j] = inDouble(DEmul0, DEmul1);
          DEadd[i][j] = inDouble(DEadd0, DEadd1);
        }
        DEfitness[i] = inDouble(DEfitness0, DEfitness1);
        DEtime[i] = inInt(DEtime0, DEtime1);
      }

    }

    public LinearTrigger createDETrigger(final int index) {
      DEtrigger[index] = new LinearTrigger(new Event() {
409
@ O v e r r i d e
410
public void run ( State state ) {
411
for ( int i = 0; i < s v c o u n t ; i ++) {
412
s t a t e . val [ s v i n d e x + i ] = s t a t e . val [ s v i n d e x + i ]
413
* DEmul [ index ][ i ] + DEadd [ index ][ i ];
414
}
415
state . val [ itime ] += DEtime [ index ];
416
state . val [ i f i t n e s s ] += D E f i t n e s s [ index ];
417
}
418
} , i n i t i a l s t a t e ) ;
419
D E t r i g g e r [ index ]. c l e a r M a s k () ;
420
for ( int j = 0; j < s v c o u n t ; j ++) {
421
D E t r i g g e r [ i n d e x ]. s e t M a s k ( s v i n d e x + j ) ;
422
}
423
r e t u r n D E t r i g g e r [ in d e x ];
424
}
425
426
p u b l i c S t a t e F u n c t i o n c r e a t e F A F u n c ( int i ) {
94

APPENDIX D. JAVA CODE - EXAMPLE PROBLEM CLASS
427
r e t u r n c r e a t e F u n c ( F A d e p e n d e n c y [ i ] , F A f u n c [ i ]) ;
428
}
429
430
p u b l i c S t a t e F u n c t i o n c r e a t e E A F u n c ( int i ) {
431
r e t u r n c r e a t e F u n c ( E A d e p e n d e n c y [ i ] , E A f u n c [ i ]) ;
432
}
433
434
public S t a t e F u n c t i o n c r e a t e F u n c ( final double [] dependency
,
435
final W e i b u l l D i s t r i b u t i o n func ) {
436
r e t u r n new S t a t e F u n c t i o n () {
437
@ O v e r r i d e
438
public double value ( State state ) {
439
d o u b l e age = 0;
440
for ( int j = 0; j < s v c o u n t ; j ++) {
441
age += d e p e n d e n c y [ j ] * st a t e . val [ s v i n d e x + j ];
442
}
443
return func . value ( age ) ;
444
}
445
446
};
447
}
448
}
449
}
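Both the failure age and the efficiency age above are evaluated through the project's `com.vuebe.math.WeibullDistribution`, whose source is not reproduced in this appendix. As an illustration only, a minimal stand-in could look as follows; the class name `WeibullSketch` and the choice that `value(t)` returns the cumulative failure probability F(t) = 1 - exp(-(t/scale)^shape) are our assumptions, not the thesis code:

```java
// Minimal sketch of a two-parameter Weibull distribution, assuming
// value(t) returns the CDF F(t) = 1 - exp(-(t/scale)^shape).
// The thesis' com.vuebe.math.WeibullDistribution may differ in detail.
public class WeibullSketch {
    private final double scale; // characteristic life, lambda > 0
    private final double shape; // shape parameter, k > 0

    public WeibullSketch(double scale, double shape) {
        this.scale = scale;
        this.shape = shape;
    }

    // Cumulative probability of failure before time t (t >= 0).
    public double value(double t) {
        if (t <= 0) return 0.0;
        return 1.0 - Math.exp(-Math.pow(t / scale, shape));
    }

    public static void main(String[] args) {
        WeibullSketch w = new WeibullSketch(1.0, 2.0);
        System.out.println(w.value(1.0));
    }
}
```

With scale 1 and shape 2, `w.value(1.0)` equals 1 - e^(-1), roughly 0.632: the probability of failure before the characteristic life, which at t = scale is independent of the shape parameter.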
package com.vuebe.vhen;

import com.vuebe.ga.GA;
import com.vuebe.math.WeibullDistribution;
import com.vuebe.util.Util;

public class TestFactory {
    public System bestsystem = null;

    public void run() {
        Util.dump("Test Factory BEGIN");
        final SystemFactory sf = new SystemFactory();
        sf.NC = 2;

        sf.sv0 = 1;
        sf.sv1 = 1;

        sf.NEprofitmin = 1;
        sf.NEprofitmax = 1;
        sf.NEprofitefficiencypenaltymax = 1;
        sf.NESVaddmax = 1;

        sf.EA0 = 1;
        sf.EA1 = 2;
        sf.EAscalemin = 1;
        sf.EAscalemax = 1;
        sf.EAshapemin = 0.5;
        sf.EAshapemax = 3;
        sf.EAdependencymax = 1; // penalty for inefficiency, coefficient max

        sf.FA0 = 1; // failure events, min
        sf.FA1 = 2;
        sf.FAscalemin = 0.05;
        sf.FAscalemax = 0.2;
        sf.FAshapemin = 1;
        sf.FAshapemax = 2;
        sf.FAdependencymax = 1;

        sf.useFE = true;
        sf.FEmul0 = 0;
        sf.FEmul1 = 0;
        sf.FEadd0 = 0;
        sf.FEadd1 = 0;
        sf.FEtime0 = 2;
        sf.FEtime1 = 4;
        sf.FEfitness0 = -5;
        sf.FEfitness1 = -10;

        sf.useDE = true;
        sf.DEmul0 = 0;
        sf.DEmul1 = 0;
        sf.DEadd0 = 0;
        sf.DEadd1 = 0;
        sf.DEtime0 = 1;
        sf.DEtime1 = 1;
        sf.DEfitness0 = -1;
        sf.DEfitness1 = -1;

        System sys = sf.createSystem();

        GA ga = new GA();
        sys.setSimulationCount(2000);
        ga.setSelectionSize(10);
        ga.setEliteSize(2);
        ga.initiatePopulation(sys);
        for (int i = 0; i < 10000; i++) {
            ga.evolve();
            bestsystem = (System) ga.getBestIndividual();
            Util.dump("N: " + ga.getEvaluations()
                    + " Value: " + bestsystem.getValue());
        }
        Util.saveString(bestsystem.toMatlab(),
                "C:/Users/jog04/Dropbox/scania/filedump", ".m");

        Util.dump("Test Factory END");
    }

    public static final void main(String[] args) {
        new TestFactory().run();
    }
}
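The `GA` class driven above lives in `com.vuebe.ga` and is not listed in this appendix. Its selection-with-elitism loop (`setSelectionSize`, `setEliteSize`, `initiatePopulation`, `evolve`, `getBestIndividual`) can be sketched for a toy real-valued problem as follows; the class `GASketch`, the fitness function and the mutation scheme are illustrative assumptions, not the thesis implementation:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

// Illustrative GA with truncation selection and elitism, mirroring the
// setSelectionSize / setEliteSize / evolve() usage above. The fitness
// function and mutation scheme are ours, not the thesis' com.vuebe.ga.GA.
public class GASketch {
    private final Random rnd = new Random(42);
    private final int selectionSize; // parents kept each generation
    private final int eliteSize;     // best individuals copied unchanged
    private List<Double> population = new ArrayList<>();

    public GASketch(int selectionSize, int eliteSize) {
        this.selectionSize = selectionSize;
        this.eliteSize = eliteSize;
    }

    // Toy fitness to maximize; optimum at x = 3.
    static double fitness(double x) {
        return -(x - 3.0) * (x - 3.0);
    }

    public void initiatePopulation(int size) {
        population.clear();
        for (int i = 0; i < size; i++)
            population.add(rnd.nextDouble() * 10.0);
    }

    public void evolve() {
        // Sort best-first and keep the top selectionSize as parents.
        population.sort(Comparator.<Double>comparingDouble(GASketch::fitness).reversed());
        List<Double> parents = new ArrayList<>(population.subList(0, selectionSize));
        // Elites survive unchanged; the rest are mutated copies of parents.
        List<Double> next = new ArrayList<>(parents.subList(0, eliteSize));
        while (next.size() < population.size()) {
            double p = parents.get(rnd.nextInt(parents.size()));
            next.add(p + rnd.nextGaussian() * 0.1);
        }
        population = next;
    }

    public double getBestIndividual() {
        double best = population.get(0);
        for (double x : population)
            if (fitness(x) > fitness(best)) best = x;
        return best;
    }

    public static void main(String[] args) {
        GASketch ga = new GASketch(10, 2);
        ga.initiatePopulation(50);
        for (int i = 0; i < 200; i++)
            ga.evolve();
        System.out.println("best x = " + ga.getBestIndividual());
    }
}
```

In the thesis setting the individual is a maintenance policy and the fitness is the simulated value of the system, but the control flow of the optimization loop is the same.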
Quote paper: Sahar Tahvili (2014), Solving Complex Maintenance Planning Optimization Problems Using Stochastic Simulation and Multi-Criteria Fuzzy Decision Making, Munich, GRIN Verlag, https://www.grin.com/document/281688