Ant Colony Algorithm in Fault Diagnosis


Specialized Book, 2016

140 Pages


Excerpt


Table of Contents

General introduction

1 CONTEXT

2 ORGANIZATION OF THE MANUSCRIPT

3 CONTRIBUTIONS OF THE BOOK

Chapter 1: Fault diagnosis by pattern recognition
1 INTRODUCTION
2 DEFINITIONS
3 THE DIFFERENT APPROACHES TO DIAGNOSIS
4 FAULT DIAGNOSIS BY PATTERN RECOGNITION
4.1 The basic principle
4.2 Analysis of observations
4.3 Reduction of the dimension of the representation space
4.4 The decision space
4.4.1 Hierarchical classification
4.4.2 Classification by partition
4.5 The decision procedure
4.5.1 Parametric methods
4.5.2 Non-parametric methods
4.5.3 Direct calculation of boundaries
4.6 Operating phase
4.6.1 The evaluation criteria
4.6.2 The evaluation methods
5 ARTIFICIAL INTELLIGENCE TOOLS FOR INDUSTRIAL DIAGNOSIS
5.1 Neural networks
5.2 Fuzzy logic
6 CONCLUSION

Chapter 2: Ant Colony Optimization
1 INTRODUCTION
2 REAL ANTS
2.1 Basic principle
2.1.1 The resolution of complex problems
2.1.2 Stigmergy
3 ARTIFICIAL ANTS
4 MULTI-AGENT SYSTEMS
4.1 The basic principles
4.2 The agents
4.3 Architecture of agents
4.4 Reactive architectures
5 THE COLLECTIVE INTELLIGENCE OF ANTS
5.1 The sharing of tasks
5.2 Self-organization
5.3 Communication
5.3.1 Sound communication
5.3.2 Tactile communication
5.3.3 Visual communication
5.3.4 Chemical communication
6 THE DIFFERENT AREAS OF APPLICATION
7 ANT COLONY ALGORITHMS AND THE ADM
7.1 The AntTreeStoch algorithm
7.1.1 The broad lines
7.1.2 The main algorithm of AntTreeStoch
7.1.3 The case of an ant on the support
7.1.4 The case of an ant placed on another
7.1.5 Criticism and limits of AntTreeStoch
7.1.6 The improvements
7.2 The Lumer & Faieta algorithm
7.2.1 The broad lines
7.2.2 The distance
7.2.3 The classification
7.2.4 Criticism and limits of the basic algorithm
7.2.5 Improvement of the algorithm
7.3 The Binary Ant Colony algorithm
7.3.1 The broad lines
7.3.2 Criticism and limits of the basic algorithm
7.3.3 Improvement of the algorithm
8 CONCLUSION

Chapter 3: Support Vector Machines (SVM)
1 INTRODUCTION
2 HISTORY
3 THE SVM
3.1 The optimal hyperplane
4 SVM: PRIMAL FORMULATION
5 SVM: DUAL FORMULATION
6 THE CASE OF A NON-LINEARLY SEPARABLE SAMPLE
7 KERNEL FUNCTIONS
8 HYPER-PARAMETERS
9 MULTI-CLASS SVM
9.1 SVM: one-against-all
9.2 SVM: one-against-one
9.3 DAG-SVM
10 REGRESSION BY SVM
11 SOFTWARE TOOLS
11.1 SVMTorch
11.2 LIBSVM
11.3 SVMLight
12 CONCLUSION

Chapter 4: Parameter selection for industrial diagnosis
1 INTRODUCTION
2 THE STEPS OF PARAMETER SELECTION
2.1 The evaluation criteria
2.1.1 Classification-error measures
2.1.2 Information measures
2.1.3 Consistency measures
2.1.4 Dependence measures
2.1.5 Distance measures
2.2 Filter methods
2.3 Wrapper methods
2.4 Embedded methods
3 THE GENERATION PROCEDURE
3.1 Complete generation
3.2 Random generation
3.3 Sequential generation
3.3.1 The constructive generation method
3.3.2 The destructive generation method
4 STOPPING CRITERION
5 VALIDATION
6 PARAMETER EXTRACTION
6.1 Principle
6.2 Principal component analysis
7 PROPOSED SELECTION PROCEDURE
8 CONCLUSION

Chapter 5: Application and validation of the proposed approaches
1 INTRODUCTION
2 DESCRIPTION OF THE FIRST PROCESS
2.1 Presentation of SCIMAT
3 THE MANUFACTURE OF CEMENT
3.1 Sintering
3.1.1 The rotary kiln
3.2 Analysis of the operation of a rotary kiln
3.3 The parameters of the sintering section
3.3.1 Operating modes studied
4 DESCRIPTION OF THE SECOND PROCESS
4.1 Presentation of the Aurès dairy
4.2 Production of milk
4.3 Pasteurization
4.3.1 Principle
4.3.2 The pasteurizer
4.3.3 The main parameters of the pasteurization process
4.3.4 Operating modes studied
5 THE ANTTREESTOCH ALGORITHM
5.1 Use of NetLogo
5.2 The test data
5.3 The parameters of AntTreeStoch
5.4 The evaluation measures
5.5 The results
6 APPLICATION OF THE LUMER & FAIETA ALGORITHM
6.1 Configuration of the algorithm
6.2 Results of the scenarios
7 APPLICATION OF THE HYBRID WRAPPER/FILTER-BASED ACO-SVM ALGORITHM
7.1 The test data
7.2 The heuristic factor FH
7.3 The configuration of the classifier
7.3.1 Adjusting the hyper-parameters
7.3.2 The classification results
7.4 The results
8 CONCLUSION

General Conclusion

Acknowledgments

In the first place, I would like to thank L. Hayet Mouss, Professor at the University of Batna and Director of the LAP (Laboratory of Automation and Production Engineering), for the confidence she showed me by agreeing to supervise this work, for her many pieces of advice, and for her constant attention to the progress of my work. Thank you also for all the proofreading, suggestions, and comments, which have enabled me to improve the quality of this book. Finally, I was deeply touched by her human qualities of listening and encouragement, which have played an important role.

I would like to devote a few lines to the people without whom this adventure would likely never have started: Mohamed BenMohamed, who agreed to supervise my Magister thesis, and Mohamed Slimane, Professor at the University of Tours, for their support when this book was still a distant project.

I express my gratitude to Hakan Cevikalp, Professor at Osmangazi University, who was kind enough to welcome me into the Machine Learning & Computer Vision laboratory.

Abstract

In this book, we propose several diagnosis modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and Binary Ant Colony. These algorithms were chosen for their simplicity and their vast field of application. However, they cannot be used in their basic form for the development of diagnostic modules, since they have several limitations. We have therefore proposed several adaptations so that these algorithms can be used in diagnostic modules. We have proposed a parallel version of the AntTreeStoch algorithm based on a reactive multi-agent system. This version minimizes the influence of the initial sort on the classification result. We have also introduced a new parameter called Sid, which allows several ants to connect at the same position, and we have modified the movements of the ants to favor the path of the most similar ant. For the Lumer & Faieta algorithm, we have accelerated the construction of classes by assigning a different speed setting to each ant. To reduce the number of movements, we have proposed a new variable that stores the identifiers of the objects displaced by the same ant. To improve the quality of classification, we have also added indices to the algorithm to flag poorly constructed classes. For the Binary Ant Colony algorithm, we have proposed a variant called "Hybrid wrapper/filter-based ACO-SVM". This algorithm performs parameter selection. It combines filter and wrapper techniques, taking advantage of the speed of the Fisher criterion and of the adaptation of the selected parameters to the SVM classifier. It improves the quality of classification according to the nature of the data in the learning base and the type of kernel function used. It also allows adjusting the hyper-parameters of the kernel function. We tested these algorithms on data from two industrial systems, a sintering system and a pasteurization system, as well as on a few databases from UCI (University of California, Irvine).

Key words

Diagnosis, classification, feature selection, ant colony algorithms, multi-agent systems.

List of Figures

Fig. 1.1: Position of diagnosis in relation to supervision

Fig. 1.2: Classification of diagnostic methods [Narvaez, 2007]

Fig. 1.3: The position of pattern recognition among diagnostic methods

Fig. 1.4: The pattern recognition process [Saint-Jean, 2001]

Fig. 1.5: Fault diagnosis by pattern recognition [Mokhtari, 2007]

Fig. 1.6: The kernel estimation method

Fig. 1.7: Structure of an RBF network for diagnosis

Fig. 2.1: The colony chooses the shortest path

Fig. 2.2: The double bridge experiment

Fig. 2.3: Model of a reactive agent

Fig. 2.4: The minimum-weight spanning tree of a planar graph

Fig. 2.5: A vehicle routing problem with a central depot

Fig. 2.6: A sequential execution of AntTreeStoch

Fig. 2.7: A parallel execution of AntTreeStoch

Fig. 2.8: A parallel execution of AntTreeStoch with the SID parameter

Fig. 2.9: Possible result of executing the LF algorithm on the grid

Fig. 2.10: Possible results of executing LF with and without quick ants

Fig. 2.11: The choice of the vector dimension

Fig. 2.12: The set of parameters selected by the filter-based ACO algorithm [Kadri et al., 2010b]

Fig. 3.1: Classification of kernel-based learning methods [NedjemEddine, 2004]

Fig. 3.2: Separation of two sets of points by linear separators

Fig. 3.3: The optimal hyperplane is perpendicular to the shortest segment joining a learning example to the hyperplane

Fig. 3.4: Example of non-linear projection

Fig. 3.5: Example of multi-class SVM classification

Fig. 3.6: Architecture of a classification system based on the one-against-all algorithm

Fig. 3.7: An observation belonging to the ambiguity region

Fig. 3.8: A classification of three classes by DAG-SVM

Fig. 3.9: Regression by SVM

Fig. 4.1: Principle of variable selection [Guérif, 2006]

Fig. 4.2: The different steps of an attribute selection procedure

Fig. 4.3: Principle of variable extraction [Guérif, 2006]

Fig. 4.4: Flowchart of the Hybrid wrapper/filter-based ACO-SVM algorithm

Fig. 4.5: The final solutions proposed by the Hybrid wrapper/filter-based ACO-SVM algorithm

Fig. 5.1: Schematic representation of the cement manufacturing principle

Fig. 5.2: The sintering installation

Fig. 5.3: The flow of milk in the manufacturing process

Fig. 5.4: The database BD1 of the sintering system

Fig. 5.5: The multi-agent model of the AntTreeStoch algorithm

Fig. 5.6: Example of the tree obtained by AntTreeStoch on the iris database

Fig. 5.7: 3D view of the iris database

Fig. 5.8: Visualization of the four malfunction modes of the first process using 3 parameters

Fig. 5.9: 3D visualization of the classification tree of database BD1 (500 nodes) and database BD2 (300 nodes)

Fig. 5.10: Configuration interface of LF

Fig. 5.11: Simulation model of the Lumer & Faieta algorithm

Fig. 5.12: Result of simulating the Lumer & Faieta algorithm with (S >= 5)

Fig. 5.13: Simulation of the LF algorithm with and without quick ants (BD1)

Fig. 5.14: Simulation of the LF algorithm with and without quick ants (BD2)

Fig. 5.15: The heuristic factor FH of the database BD

Fig. 5.16: The heuristic factor FH of the vehicle database

Fig. 5.17: The heuristic factor FH of the database RCK1

Fig. 5.18: The heuristic factor FH of the database RCK2

Fig. 5.19: Classification result for the database ex8a

Fig. 5.20: Classification result for the database Oven_CLASS

Fig. 5.21: Classification result for the database Heart_scale

Fig. 5.22: Classification result for the database of the milk pasteurization system

Fig. 5.23: The values of FV in the last iteration (vehicle)

Fig. 5.24: The best solution obtained in each iteration (BD2)

Fig. 5.25: The value of FV obtained by each agent in the last iteration (RCK1)

Fig. 5.26: The best result obtained at the end of each iteration (RCK1)

Fig. 5.27: The best result obtained at the end of each iteration (RCK2)

List of Tables

Table 1.1: Example of a learning data set

Table 1.2: The contingency table

Table 3.1: The SVM kernels most used in the field of classification

Table 4.1: General principle of a genetic algorithm

Table 5.1: Main parameters of the sintering section

Table 5.2: The different operating classes

Table 5.3: The respective pasteurization operations

Table 5.4: The key parameters of the pasteurization process

Table 5.5: Operating modes of the pasteurization process

Table 5.6: The parameters of AntTreeStoch

Table 5.7: Results achieved by a sequential and a parallel implementation of the AntTreeStoch algorithm

Table 5.8: The final tree obtained by AntTreeStoch with and without the SID parameter

Table 5.9: Details of the databases used in the tests

Table 5.10: The parameters of the selection algorithm

Table 5.11: Description of the databases

Table 5.12: Results obtained for different values of the pair (C, σ)

Table 5.13: Inputs of GATOOL

Table 5.14: Classification performance using the different inputs

Table 5.15: Description of the selected parameters


General introduction

1 Context

This work was carried out within Team 3S (Surveillance, Supervision, Security) of the LAP (Laboratory of Automation and Production Engineering) of the University of Batna, in the framework of a CNEPRU project entitled "Artificial intelligence, bio-inspired methods, NICTs and emergence methods: tools for the supervision and e-maintenance of production systems", identified under code B*03520100014.

The objectives of our group are to propose methods and to develop software for the detection and localization of faults in production systems using pattern recognition techniques.

The technological development of production tools has led to the emergence of very complex and dynamic industrial systems. Any industrial system can pass directly from a normal operating mode to a faulty one, which corresponds to the alteration or cessation of the system's ability to perform a production function. To avoid this problem, one can use a systematic maintenance strategy, replacing the elements of the system according to a timetable. As this strategy requires a total or partial shutdown of the production system, it presents a major disadvantage: it reduces the availability of the production tools, which in turn decreases productivity. The ideal solution is to use conditional maintenance, which is applied only in the case of a malfunction. By using this type of maintenance, we can improve the performance of the system and limit the cost of maintenance. Nevertheless, with this type of maintenance, one must always use a diagnostic system that indicates the operating state at any moment.

We can distinguish two types of diagnostic methods. The first type includes the diagnostic methods based on analytical models, which detect a possible failure by calculating the deviation between the parameters and magnitudes of the machine and those of the model. This action uses observation and comparison algorithms. These methods do not require a priori assumptions about the different failure modes the system may suffer, and they make it possible to follow the degradation of the system's performance. The second type of diagnostic approach brings together the model-free methods. These methods use the operating history of the system and the experience of experts. Their basic principle is to extract information from the measured signals and to use this information to implement the diagnostic system. The effectiveness of the diagnostic system is linked to the relevance of the fault indicators retained and to the preprocessing of the data used. Pattern recognition is a well-known method of this second type. It is the most suitable for the diagnosis of complex systems. The principle is to assign any new observation to a class that represents an operating or malfunction mode of the system. There are several pattern recognition methods. We have chosen ant colony algorithms as the classification method because of their simple principles. We have also used this type of algorithm to solve the problem of parameter selection, which represents the most important phase in the diagnostic process for complex and dynamic systems. These systems exhibit non-linear and unstable behavior during operation. Modeling them mathematically is very difficult, since they have a huge number of variables characterizing their operating states.

The problem this book proposes to address is the adaptation of ant colony algorithms to the specificities of data from the industrial field, in order to develop powerful diagnosis models whose results are easily interpretable by the user.

Among ant colony algorithms, we are interested in three: AntTreeStoch, Lumer & Faieta, and Binary Ant Colony. In their basic form, these algorithms cannot contribute to solving the problem of diagnosing complex, dynamic systems, since they present a set of weaknesses.

- The major weakness of the AntTreeStoch algorithm lies in the initial sorting of the data, which requires a very significant computation time. Another disadvantage of this algorithm is the random choice of path, which leads to poor data classification.
- The Lumer & Faieta algorithm is very expensive in terms of time since, according to its basic principle, ants can move objects several times even if they are well classified.
- The main disadvantage of the Binary Ant Colony algorithm is its great slowness, since it requires several executions of a genetic algorithm to initialize the pheromone on the network, and further executions at each iteration.

In this framework, we have proposed to improve these three algorithms, on the one hand by increasing the speed of decision and, on the other hand, by improving the quality of classification.

2 Organization of the manuscript

This book, composed of five chapters, begins with a first chapter on diagnosis by pattern recognition. It presents the basic concepts of the approach to diagnosing complex and dynamic systems by pattern recognition. The objective of this chapter is to highlight the use of classification methods for the diagnosis of complex and dynamic industrial systems.

The second chapter is composed of two parts. The first part constitutes a bibliographical study of ant colony algorithms. In the second part, we propose new implementations of a few algorithms that we have used for the development of an industrial diagnosis system.

In the third chapter, we present a few basic concepts of support vector machines (SVM): hyperplane, margin, and support vector. We also highlight the use of SVM in the two cases where the data are linearly or non-linearly separable. We finish with a presentation of a few algorithms for multi-class SVM classification.

The fourth chapter discusses the problem of parameter selection and its position in the diagnostic process. It presents in detail the two approaches used to carry out the stage of reducing the dimension of the state vector, which are parameter selection and parameter extraction. It also presents a new approach for parameter selection based on support vector machines and ant colony algorithms.

We finish the document with a chapter devoted first to the study of two industrial processes: the sintering system of SCIMAT (Cement Company of Aïn Touta) and the pasteurization system of the ULA (Aurès Dairy Unit in Batna). In a second part, we present the results of applying the proposed algorithms to these industrial systems and to a few databases.

3 Contributions of the book

The contributions of the book concern the following points:

- A state of the art on industrial diagnosis by pattern recognition, ant colony algorithms, the reduction of the dimension of state vectors by parameter selection, and the classification of data by support vector machines.
- The proposal of a parallel version of the AntTreeStoch algorithm to minimize the influence of the initial sort on the quality of classification. In order to minimize the size of the classification tree, we have proposed a variant of this algorithm by adding a new parameter called Sid, which allows several ants to connect at the same position, and we have modified the movements of the ants to favor the path of the most similar ant.
- The proposal of a variant of the Lumer & Faieta algorithm that creates several sub-groups of ants according to their speed of travel. We have added a small memory to each ant to minimize the number of comparisons. We have also added an index to the algorithm to flag non-homogeneous groups of objects.

The originality of this work lies in the proposal of a new parameter selection algorithm called "Hybrid wrapper/filter-based ACO-SVM". The proposed algorithm combines filter and wrapper techniques, taking advantage of the speed of the Fisher criterion and of the adaptation of the selected parameters to the SVM classifier (support vector machines).

This algorithm, developed in the framework of our research work, improves the quality of classification according to the nature of the data in the learning base and the type of kernel function used.

It also allows adjusting the hyper-parameters of the kernel function.

Fault diagnosis by pattern recognition

Summary:

This chapter is devoted to defining the basic concepts of the approach to diagnosing complex and dynamic systems by pattern recognition. We first describe the position of diagnosis relative to the monitoring function. We then treat industrial diagnosis as a pattern recognition problem. We conclude this chapter with a presentation of two artificial intelligence tools widely used in the field of industrial diagnosis.

Chapter 1: Fault diagnosis by pattern recognition

1 Introduction

Complex and dynamic systems have non-linear and unstable behavior during operation. It is very difficult, or even impossible, to model this type of system mathematically. This difficulty is related to the large number of variables that characterize the operating state of the system. Our work relates to the monitoring of this type of system, and more precisely to the diagnostic phase. The main function of monitoring is to verify the operational status of the system. It is composed of two parts, detection and diagnosis (Figure 1.1). The detection phase determines whether the state of the system is normal or abnormal. The diagnostic phase consists of identifying the failed components and finding the causes from a set of observed symptoms.

illustration not visible in this excerpt

Fig. 1.1: Position of diagnosis in relation to supervision

From a general point of view, determining the causes from the effects is a classification problem. It compares a data vector representing the current state of the system with reference vectors representing the operating history of the system.

Several researchers have shown the interest of pattern recognition in different areas where the systems are of a complex nature, such as image recognition, handwriting recognition, medical diagnosis, industrial diagnosis, etc. [Fasel and Luettin, 2003] [Plotz and Fink, 2009] [Mahdaoui and Mouss, 2008] [Dazzi et al., 2001].

Several advantages make the use of pattern recognition attractive, such as the speed of calculation when recognizing new information and the ability to deal with uncertain and imprecise data. Moreover, pattern recognition makes it possible to detect and follow the evolution of system operation [Ondel et al., 2006].

2 Definitions

This section presents the terminology used in the literature and in particular that relating to our work [Mouss, 2006].

> Supervision is a task of controlling and overseeing the execution of an operation or work carried out by other people, without going into the details of this execution.

> Monitoring is a continuous real-time task whose purpose is to characterize the operating mode of the physical system by recording information and by recognizing and indicating behavioral anomalies.

> Maintenance is a set of technical actions and corresponding administrative operations designed to maintain or restore an entity to a specified state, or to conditions of operational safety, allowing it to perform a required function.

> Diagnosis is a function designed to provide information on anomalies within a physical system. Several diagnostic levels are traditionally distinguished: detection, localization, and identification of anomalies.

> Detection is the first level of diagnosis, consisting of determining quickly and reliably the existence of an anomaly.

> Identification is a diagnostic level consisting of precisely characterizing the anomalies that have occurred.

> Localization is a diagnostic level, triggered by a detection procedure, consisting of determining where the anomalies occurred.

> A failure is the alteration or cessation of the ability of an entity to perform a required function. A failure is the passage of an entity from a state of normal operation to an abnormal or failed state.

> A fault is the deviation between an actual characteristic of an entity and the desired characteristic, this deviation exceeding the limits of acceptability.

> A breakdown is the inability of an entity to perform a required function or to provide the appropriate service because of a failure.

> Pattern recognition is a science that brings together all the algorithms and methods allowing the classification of objects or patterns by comparing them to reference patterns.

> A pattern vector is composed of a set of parameters, which may be numerical or symbolic.

> A prototype is the set of specific values of the parameters that characterize a particular situation.

> A class is the set of possible values of the attributes.

> The classification of an object is the decision to assign the object to a particular class.

3 The different approaches to diagnosis

There are two types of diagnostic approaches [Adrot, 2000]. The first comprises the diagnostic methods based on analytical models. To detect a possible failure, these methods calculate the difference between the parameters and magnitudes of the machine and those of the model, using observation and comparison algorithms. The second type of diagnostic approach brings together the model-free methods. We use these methods when we cannot build a functional model of the system, because of a total lack of analytical and structural knowledge of the system. In this case, we build a model using the operating history of the system and the experience of experts. The basic principle of these methods is to extract information from the measured signals and to use this information to implement the diagnostic system. The effectiveness of the diagnostic system depends on the relevance of the fault indicators retained and the preprocessing of the data used.

An organization of diagnostic methods that gives a general overview can be found in [Narvaez, 2007]. Figure 1.2 presents this classification of diagnostic methods.

illustration not visible in this excerpt

Fig. 1.2: Classification of diagnostic methods [Narvaez, 2007]

Figure 1.2 shows that there are several methods for constructing a diagnostic module for an industrial system. The method used depends on the way the system's data are represented, as well as on the expert's degree of knowledge of the various operating and malfunction modes of the system.

We use a mathematical model [Adrot, 2000] for diagnosis if the data are numerical in nature and a mathematical model of the system is available. The mathematical model compares the values of the observations with thresholds to detect a malfunction state. The major advantage of these methods is the ability to detect faults through trend analysis of the signals, as well as the ability to give an exact location of the failed component. Nevertheless, these methods suffer from several disadvantages, among them the need for complete knowledge of the various operating and malfunction modes of the system. They are also very sensitive to modeling errors.

If the information is symbolic in nature, the expert-system approach is preferred [Mouss, 2006]. This type of (qualitative) approach is adapted to problems that require the treatment of a large quantity of heterogeneous and/or contextual data. An approach based on pattern recognition [Mahdaoui and Mouss, 2008] is used if one cannot build a mathematical model representing the different operating and malfunction modes of the system studied, or if this model is so complex that its use is impossible.

Methods based on pattern recognition allow the development of a diagnostic system even if the system studied is complex (Figure 1.3).

illustration not visible in this excerpt

Fig. 1.3: The position of pattern recognition among diagnostic methods

For the systems studied in the framework of this book, we believe that an approach based on pattern recognition is the most appropriate, since the two processes studied represent dynamic and complex systems [Kadri et al., 2012a].

4 Fault diagnosis by pattern recognition

4.1 The basic principle

The diagnosis of an industrial system seeks to identify, in a permanent way, the operating state of the system. We can define it as a pattern recognition problem, the purpose of which is to recognize a pattern among several possibilities from noisy observations. We can distinguish two approaches [Mokhtari, 2007]:

- Statistical (numerical) pattern recognition, which uses probabilistic or fuzzy modeling of patterns;
- Structural or syntactic pattern recognition, which exploits the relations between the components of the pattern.

In statistical pattern recognition, each operating state (observation) is represented by a pattern vector $x$ of parameters, such that $x = (x_1, \ldots, x_d)$. The vector $x$ is represented by a point in a representation space $\mathbb{R}^d$. The parameters $x_i$ of the pattern vector $x$ reflect the state of the studied system. They come from the treatments carried out on the signals provided by the installed sensors (vibration, speed, currents, etc.). For each new observation, we have to decide among $M$ classes, which correspond to regions of the representation space grouping similar patterns.

Figure 1.4 shows that a pattern recognition system is composed of five main steps: data acquisition, generation of characteristics, extraction/selection of parameters, classification, and assessment of the system.

illustration not visible in this excerpt

Fig. 1.4: The pattern recognition process [Saint-Jean, 2001]

4.2 Analysis of observations

In an application, if the acquired data are in the form of signals, it is essential to extract numerical parameters from them. These parameters, which constitute the pattern vector, must be able to describe the operation of the system. In this phase, one must also define the different classes that represent the operating modes of the system studied. Therefore, at the end of this phase one obtains a set of N entities grouped into M classes, which represents the learning data. We can write this set in matrix form (Table 1.1).

Table 1.1: Example of a learning data set

illustration not visible in this excerpt

The analysis phase is very costly in terms of computational time and requires a great deal of information about the system studied in order to select the appropriate and optimal parameters for a better diagnosis.

4.3 Reduction of the dimension of the representation space

The effectiveness of a diagnostic system depends on the relevance of the chosen parameters. It is therefore essential to find the parameters that make it possible to distinguish between the various operating and malfunction modes of the system.

The use of a huge number of parameters influences the speed of identification. It is also possible to identify the different operating states of a system using a reduced set of parameters [Kadri et al., 2010a].

We must use a reduction method in order to minimize the size of the state vector and retain only the parameters deemed relevant. This method reduces the time needed to identify a new observation and avoids the performance degradation of the diagnostic system caused by the use of a huge number of parameters.

There are two types of dimension reduction methods. The first defines a new set of parameters that are linear or non-linear combinations of the original ones. These are called extraction methods [Duda, 2001]. The most used are principal component analysis (PCA) [Sophian et al., 2003], discriminant analysis (AFD) [Lebart et al., 2000], the vector quantification and projection network (VQP) [Mathieu et al., 2000], and Sammon's projection [Jain and Mao, 1994].

The second type of dimension reduction method selects a subset of the parameters from the original set. The new set must minimize the intra-class variance and maximize the between-class variance. These are called selection methods. Any parameter selection method consists of four essential elements [Al-Ani, 2005]:

- A set of departure that represents the set of parameters that were initially used by a procedure of research. This set may be empty, or contains all settings or a random subset.
- The search procedure is the essential element of any method of selection. It returns as a result the subset of parameters that better respond to the criterion of quality. This criterion is used by an evaluation function.
- The evaluation function determines the quality of classification obtained using a sub­set of parameter.
- A criterion of judgment is used to complete the search procedure. This criterion depends either on the evaluation function or well to the configuration settings that are defined by the user.

The selection of m attributes among n represents a combinatorial explosion problem. There are several approaches to avoid an exhaustive search. Among the first works in this field were those of Narendra and Fukunaga [Narendra and Fukunaga, 1977], who used the branch-and-bound algorithm, which requires the monotonicity of the evaluation criterion employed. This means that adding a new parameter to the current subset should not decrease the value of the criterion. However, in reality, this condition is not satisfied by most of the criteria used in the context of variable selection. Despite the different variants proposed for this algorithm, they are not applicable to problems where the number of parameters is very high.

There is another category of algorithms called wrappers [Guyon and Elisseeff, 2003]. The two famous algorithms of this category are sequential forward selection (SFS) [Blum and Langley, 1997] and sequential backward selection (SBS) [Kittler, 1978]. The starting point of these algorithms is, respectively, the empty set or the complete set of parameters. Initially, all parameters are evaluated individually and the one that gives the best classification performance is selected. This iteration is repeated until the quality of classification no longer increases. The difference between the two algorithms lies in the fact that the first adds a parameter at each iteration, whereas the second removes one. The major disadvantage of this category is the strong dependency between the result and the initial order of the parameters. Yang and Honavar [Yang and Honavar, 1998] applied genetic algorithms to parameter selection. The initial population, which represents a set of possible solutions, is generated randomly, and the fitness is used as the evaluation criterion. The passage from one generation to another is achieved by applying selection, crossover, and mutation. The purpose of these transformations is to converge toward an optimal solution. Genetic algorithms have given more interesting results than the previous methods. We propose in the second chapter a new parameter selection method based on a variant of the ant colony optimization (ACO) algorithm [Weiqing et al., 2006]. We show that the results obtained by this approach are comparable to those obtained by applying genetic algorithms.
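To fix ideas, here is a minimal Python sketch of the SFS principle just described. The `evaluate` function stands for any classification-quality criterion (for example, a cross-validated accuracy) and is an assumption of this sketch, as is the exact stopping rule.

```python
import numpy as np

def sequential_forward_selection(X, y, evaluate, max_features=None):
    """Greedy SFS: start from the empty set and, at each iteration, add the
    parameter that most improves the evaluation score; stop when no
    remaining parameter improves it."""
    n_features = X.shape[1]
    max_features = max_features or n_features
    selected, best_score = [], -np.inf
    while len(selected) < max_features:
        candidate, candidate_score = None, best_score
        for f in range(n_features):
            if f in selected:
                continue
            score = evaluate(X[:, selected + [f]], y)
            if score > candidate_score:
                candidate, candidate_score = f, score
        if candidate is None:   # no parameter improves the criterion: stop
            break
        selected.append(candidate)
        best_score = candidate_score
    return selected, best_score
```

SBS is the mirror image: it starts from the full set and removes, at each iteration, the parameter whose removal degrades the criterion least.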

4.4 The decision space

In this phase, the learning data are partitioned into groups using a classification method. In a second step, a relationship is established between these groups and the different operating states of the system. The purpose of this phase is therefore to find a grouping into classes of the whole data set by establishing classification procedures. The classification is called exclusive when the partitioning of objects associates each object with one and only one class. The methods of this category are divided into two families: hierarchical methods (ascending or descending) and partition-based classification methods. By contrast, a non-exclusive classification allows an object to belong to several classes. On another axis, we can also distinguish supervised classification, where the objects carry information indicating their class, from unsupervised classification, where only a dissimilarity matrix is used and no information on the class of an object is provided to the method [Kadri et al., 2010a].

4.4.1 Hierarchical classification

This type of classification is based on the calculation of distances between the items of information to build the classes. At the end of the classification procedure, one obtains a tree in which each node corresponds to a record in the learning base. To perform a partition, this tree is cut at a given level (each cut-off level corresponds to a partition). The ascending classification algorithm proceeds by successive mergers of elements, while descending classification proceeds in the inverse manner: it considers the learning base as one overall class and splits it into two sub-classes. The division operates in such a way that the distance between the two sub-classes is as large as possible, to create two well-separated classes. In practice, these algorithms pose computational difficulties for large databases. However, we propose in the next chapter improvements to an algorithm of this type to make it faster and usable online [Chen et al., 2002].
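The ascending (agglomerative) variant can be sketched with SciPy's hierarchical-clustering routines; the synthetic data, the Ward merging rule, and the cut level chosen here are illustrative assumptions, not choices made by the book.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic groups of observations in a 3-parameter space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))])

# Ascending classification: successive mergers build the tree
Z = linkage(X, method='ward')

# "Cutting the tree": here we request the partition with two classes
labels = fcluster(Z, t=2, criterion='maxclust')
```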

4.4.2 Classification by partition

In this type of classification, a class represents a region of space that brings together similar observations. Classification by partition is usually achieved through coalescence (clustering) methods [Azzag et al., 2004]. The result of executing these methods is a set of classes in which the similarity within the same class is high but the similarity between different classes is low. In a first step, these methods calculate the centers of gravity of the classes, and then they assign each observation to the closest center of gravity.
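A minimal sketch of such a coalescence method, in the spirit of k-means: centers of gravity are computed, each observation is assigned to the closest center, and the two steps alternate until stabilization. The random initialization and the number of classes k are assumptions of this sketch.

```python
import numpy as np

def coalescence(X, k, n_iter=100, seed=0):
    """Partition X into k classes by alternating assignment to the
    closest center of gravity and recomputation of the centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # distance of every observation to every center of gravity
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new_centers = np.array([X[labels == c].mean(axis=0)
                                if np.any(labels == c) else centers[c]
                                for c in range(k)])
        if np.allclose(new_centers, centers):   # partition has stabilized
            break
        centers = new_centers
    return labels, centers
```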

4.5 The decision procedure

At the end of the previous step, one obtains a learning set structured into separate groups. The observations are assigned in such a way that each of them belongs to one of the M classes. In several cases, the assignment must not be systematic, because errors can be committed. The decision procedure must propose several solutions to minimize the risk of misidentification. These solutions are provided by the ambiguity and distance rejection options (Figure 1.5) [Mokhtari, 2007]. Ambiguity rejection filters out ambiguous observations: it assigns the observations in this situation to a new fictitious class called the ambiguity rejection class. Distance rejection concerns the outliers that do not correspond to any of the classes of the system; the observations in this situation are assigned to a new fictitious class called the distance rejection class.

The possibility of taking into account new observations not encountered before minimizes the risk of misclassification and represents a necessary alternative in diagnosis (Figure 1.5).

illustration not visible in this excerpt

Fig. 1.5: Fault diagnosis by pattern recognition [Mokhtari, 2007]

In addition, we can write the decision rules for a new observation as follows (a sketch in code is given after the list):

- If an observation X is identified, then classify X in the corresponding class.
- If X is rejected in ambiguity, then add X to the ambiguity rejection class ($w_0$).
- If X is rejected in distance, then add X to the distance rejection class ($w_1$).
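The following sketch illustrates these three rules, assuming each class is summarized by a prototype and membership is measured by the Euclidean distance; the thresholds `d_max` (distance rejection) and `margin` (ambiguity rejection) are illustrative values, not values from the book.

```python
import numpy as np

AMBIGUITY, DISTANCE = -1, -2   # fictitious rejection classes w0 and w1

def decide(x, prototypes, d_max=3.0, margin=0.2):
    """Assign x to the class of the closest prototype, with distance
    rejection (x far from every class) and ambiguity rejection (two
    classes almost equally close). Requires at least two prototypes."""
    d = np.array([np.linalg.norm(x - p) for p in prototypes])
    order = np.argsort(d)
    best, second = d[order[0]], d[order[1]]
    if best > d_max:              # x resembles none of the known classes
        return DISTANCE
    if second - best < margin:    # x is ambiguous between two classes
        return AMBIGUITY
    return int(order[0])
```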

There are two types of approaches for implementing decision procedures [Lebart et al., 2000]. The first uses statistical methods, parametric or non-parametric. The second type, analytical, focuses on the calculation of the decision boundaries between classes.

4.5.1 Parametric methods

In parametric methods, the probability law governing the distribution of the observations in the classes has a known parametric form, and the problem then becomes the estimation of a parameter vector from the observations of each density $f(x/w_i)$, which represents the probability law followed by a vector $x$ in a class $w_i$. The a priori probability $\Pr(w_i)$ of each class $w_i$ is known and satisfies the normalization condition:

$\sum_{i=1}^{M} \Pr(w_i) = 1$

Among these methods, the Bayesian method is the most common [Lewis and Ringuette, 1994].

4.5.1.1 The Bayesian method

It uses Bayes' formula to calculate the a posteriori probability that a vector $x$ belongs to a class $w_i$, as follows:

$\Pr(w_i / x) = \dfrac{f(x / w_i)\,\Pr(w_i)}{f(x)}$

Bayes' decision rule, concerning the class $w_i$ of a vector $x$, is written as follows:

$x \in w_i \;\text{ if }\; \Pr(w_i / x) = \max_{j=1,\ldots,M} \Pr(w_j / x)$

Bayes' decision rule can include ambiguity rejection by using a constant C. The previous formula then becomes:

$x \in w_i \;\text{ if }\; \Pr(w_i / x) = \max_{j=1,\ldots,M} \Pr(w_j / x) \;\text{ and }\; \Pr(w_i / x) \geq C$; otherwise $x$ is rejected in ambiguity

Knowing that ambiguity rejection is possible for:

$\dfrac{1}{M} < C \leq 1$

Bayes' decision rule can also include distance rejection, using the notion of the mixture density, noted $f(x)$. An observation is rejected in distance when $f(x)$ falls below a density threshold $C_1$. Rule 1.4 thus becomes:

if $f(x) < C_1$, then $x$ is rejected in distance; otherwise, decision rule 1.4 applies
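A compact sketch of the complete rule, assuming the posterior probabilities $\Pr(w_i/x)$ and the mixture density $f(x)$ have already been estimated; the constants C and C1 are illustrative.

```python
import numpy as np

def bayes_decide(post, f_x, C=0.7, C1=1e-3):
    """Bayes decision with ambiguity rejection (constant C) and distance
    rejection on the mixture density f(x) (threshold C1).
    post: posterior probabilities Pr(w_i / x); f_x: mixture density at x."""
    if f_x < C1:                # x is unlike all the learning data
        return "distance rejection"
    i = int(np.argmax(post))
    if post[i] < C:             # no class is sufficiently probable
        return "ambiguity rejection"
    return i                    # index of the winning class w_i
```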

4.5.2 Non-parametric methods

In the case where the density $f(x/w_i)$ is unknown, non-parametric methods are used for the estimation [Ondel, 2006]. There are several non-parametric estimation methods. In this section, we present the Parzen kernel method [Parzen, 1962] and the k-nearest-neighbors method (k-PPV) [Fukunaga, 1990] [Ondel, 2006].

4.5.2.1 The Parzen kernel method

Parzen and Rosenblatt [Haykin, 1999] introduced the concept of a kernel function φ in order to estimate the density function $f(x/w_i)$ from a learning set $(X_1, \ldots, X_n)$. The only constraint is that the kernel function must be positive and normalized.

Estimating the density at a point x consists of calculating the proportion of observations $(X_1, \ldots, X_n)$ located near x. For this, we build a volume $v(x)$, defined by a distance that depends on the distribution of the learning points, and having the point x as its center. The volume $v(x)$ is then replaced by a bell curve centered at x. The closer a point is to the support point x, the greater the value the bell curve gives it. Conversely, points too distant from x are assigned a small value. The estimator is formed by the sum of the bell curves. Figure 1.6 represents a kernel estimate of a sample of 60 random numbers distributed according to the normal law, for different values of the window.

illustration not visible in this excerpt

Fig. 1.6: The kernel estimation method
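A one-dimensional sketch of the Parzen estimate with a Gaussian bell; the width h plays the role of the window parameter varied in Figure 1.6, and its values here are illustrative.

```python
import numpy as np

def parzen_estimate(x, samples, h=0.5):
    """Parzen estimate of the density at x: average of Gaussian bells of
    width h centred on each learning sample (1-D case)."""
    u = (x - samples) / h
    bells = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)  # positive, normalised kernel
    return bells.mean() / h

# 60 random numbers distributed according to the normal law, as in Figure 1.6
samples = np.random.default_rng(0).normal(size=60)
print(parzen_estimate(0.0, samples, h=0.3))
```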

4.5.2.2 The method of k nearest neighbors

The k-nearest-neighbors method (k-PPV) is the most common and simplest of the non-parametric methods. It was introduced by Fix and Hodges [Fukunaga, 1990]. k-PPV is a supervised classification method. It uses a learning set $(X_1, \ldots, X_n)$ composed of n observations, each labeled with one of the M known classes. To assign a new observation x to a class, k-PPV calculates the distance between x and the n observations of the learning set and finds its k nearest neighbors. The new observation x is then assigned to the class that contains the majority of its k nearest neighbors.

To calculate the distance between two vectors x and y, the general (Minkowski) formula is defined as follows:

$d(x, y) = \left( \sum_{i=1}^{d} |x_i - y_i|^p \right)^{1/p}$

The most popular distance is the Euclidean distance (p = 2): $d(x, y) = \sqrt{\sum_{i=1}^{d} (x_i - y_i)^2}$

The choice of the number of neighbors k is a delicate problem. Taking a large number of neighbors reduces the noise (observations that do not reflect reality), since an outlier selected as the nearest neighbor will have its influence tempered by the other observations. On the other hand, a small number of neighbors leads to greater sensitivity to the specificities of each class, since the average of the observations of a class is taken into account less.

A good compromise to avoid the problem of indecision is to choose $k = \sqrt{n}$.

Although the k-PPV method does not require time for parameter calculation, it has the disadvantage of taking a long time to find the k nearest neighbors. To decrease the calculation time, one can use a space of smaller dimension, or structure the data in the form of a tree, which accelerates the search; moreover, before beginning the search procedure, the whole learning set should be analyzed to eliminate redundant elements.
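A minimal sketch of the k-PPV rule with the Euclidean distance, using the k = √n compromise as default; the majority vote is the standard reading of "the class that contains the majority of its k nearest neighbors".

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=None):
    """Assign x to the majority class among its k nearest neighbours in
    the learning set. If k is not given, use the k = sqrt(n) compromise."""
    X_train, y_train = np.asarray(X_train), np.asarray(y_train)
    k = k or max(1, int(np.sqrt(len(X_train))))
    d = np.linalg.norm(X_train - np.asarray(x), axis=1)  # Euclidean distances
    neighbours = np.argsort(d)[:k]
    return Counter(y_train[neighbours]).most_common(1)[0][0]
```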

4.5.3 Direct calculation of boundaries

The objective of the direct boundary calculation method is to define mathematical formulas that allow a better separation of the classes. The choice of the most appropriate boundary depends on the complexity of the decision boundary, in other words on how well or poorly the classes of the learning set are separated. The main advantage of this approach is that it uses only the equations in the decision phase (and not the whole learning set). However, in reality, it is not always possible to separate the classes linearly, because the shapes of the classes are not linearly separable. More complex mathematical functions are then used to solve this problem. Sometimes, some data of the learning set must be ignored in order to define the boundary equations [Casimir, 2003].

The linear discrimination function has the following form:

$g(x) = F(w^T x + w_0)$

where w is the weight vector, $w_0$ is the bias, and F is a function that converts the result into the desired output.

A hyperplane is used to define the linear separation boundary between two classes $w_i$ and $w_j$. This hyperplane has the equation:

$w^T x + w_0 = 0$

which we can write in the following form:

$\sum_{i=1}^{d} w_i x_i + w_0 = 0$

To define the equation of a hyperplane separating two classes $w_i$ and $w_j$, one must find a weight vector and fix a threshold. There are two types of methods for calculating the parameters of the weight vector. The first type is based on conditional probability; an example of a method of this type is Fisher's linear discriminant [Duda, 2001]. The second type is based on discriminant model analysis [Duda, 2001]; an example is the support vector machine (SVM) [Youn, 2010]. The latter will be presented in detail in the third chapter.

4.5.3.1 Fisher's linear discriminant

Fisher's linear discriminant [Youn, 2010] is based on the maximization of the Fisher criterion. It makes it possible to assess the relevance of each parameter and to calculate the quality of the classification done using a single parameter. The formula of this criterion, for the case of two classes, is as follows:

$F(a) = \dfrac{\left( M_1(a) - M_2(a) \right)^2}{s_1^2(a) + s_2^2(a)}$

Knowing that:

M represents the number of classes.

$M_c(a)$ represents the center of gravity of class c, considering only the parameter a. It is calculated as follows:

$M_c(a) = \dfrac{1}{N_c} \sum_{v=1}^{N_c} X_{cv}(a)$

where $X_{cv}$ is the vector v of class c, and $N_c$ is the number of vectors of that class.

$s_c^2(a)$ is the variance of the parameter a over the vectors of class c:

$s_c^2(a) = \dfrac{1}{N_c} \sum_{v=1}^{N_c} \left( X_{cv}(a) - M_c(a) \right)^2$

The Fisher criterion takes into consideration the intra-class variance and the between-class variance [Duda, 2001]. The objective of Fisher's linear discriminant is to find a vector v for which the following criterion is maximal:

$J(v) = \dfrac{v^T S_B v}{v^T S_W v}$

The optimum value of this criterion is obtained by cancelling its gradient. The value of v is therefore obtained by:

$v = S_W^{-1} (M_1 - M_2)$

knowing that the intra-class variance matrix $S_W$ is calculated as follows:

$S_W = \sum_{c=1}^{M} \sum_{v=1}^{N_c} (X_{cv} - M_c)(X_{cv} - M_c)^T$

while the between-class variance matrix $S_B$, which represents the separation between the different classes, is calculated as follows:

$S_B = \sum_{c=1}^{M} N_c (M_c - \bar{M})(M_c - \bar{M})^T$

With:

- $\bar{M}$ : the overall center of gravity

- M : the number of classes

- $M_c$ : the center of gravity of class c

- $X_{cv}$ : the v-th vector of class c

- $N_c$ : the number of vectors of class c

- N : the total number of vectors
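For the per-parameter, two-class form of the criterion given above, the computation is direct; the small constant added to the denominator is only there to avoid division by zero and is an assumption of this sketch.

```python
import numpy as np

def fisher_score(x1, x2):
    """Fisher criterion of a single parameter a for two classes: squared
    distance between the class means over the sum of the class variances.
    x1, x2: values of the parameter over the vectors of each class."""
    m1, m2 = x1.mean(), x2.mean()
    s1, s2 = x1.var(), x2.var()
    return (m1 - m2) ** 2 / (s1 + s2 + 1e-12)
```

Ranking the parameters by this score is the fast filter step that the Hybrid wrapper/filter-based ACO-SVM algorithm, presented later, takes advantage of.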

The major disadvantage of this type of approach is that the boundary equations depend on the learning set [Casimir, 2003]. This means that if this set does not contain all the possible cases, the equations do not allow the identification of a new observation. The coefficients of the equations must then be updated so that this new observation can be recognized.

4.6 Operating phase

4.6.1 The evaluation criteria

To assess the performance of the diagnostic system developed, it is necessary to measure the quality of the classifier employed using an evaluation criterion. Another objective of using an evaluation criterion is to adjust the settings of the classifier [Kadri et al., 2012b]. To carry out this evaluation, we use another set of observations called the test set. We know in advance the class of each observation of the test set. In the literature, there are several evaluation criteria. These criteria are based on a contingency table [Rezki et al., 2011] [Azzag et al., 2004]. We use the table of a binary classification to illustrate the calculation of the different criteria that follow:

Table 1.2: The contingency table

                              x belongs to the class    x does not belong to the class
  x assigned to the class     TP (true positives)       FP (false positives)
  x not assigned              FN (false negatives)      TN (true negatives)

Using the statistics of this table, the following criteria are defined:

$\text{precision} = \dfrac{TP}{TP + FP} \qquad \text{recall} = \dfrac{TP}{TP + FN} \qquad \text{accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN} \qquad \text{error} = 1 - \text{accuracy}$

The precision represents the number of correct assignments over the total number of assignments made, while the recall represents the number of correct assignments over the number of assignments that should have been made. The accuracy and error measures are the most used in pattern recognition. They are based on the total number of observations. These last two measures are not adapted to problems where the classifier is unable to classify the data and therefore sends a large number of observations to the rejection class: such a classifier returns a low error rate and a good accuracy. That is why it is not interesting to use these two measures in the learning phase. To calculate these measures in the multi-class case, the measurement is computed for each class, and the average is then calculated to obtain the overall measure [Azzag et al., 2004].

4.6.2 The evaluation methods

These evaluation criteria are exploited to estimate the classification error using one of the following methods: re-substitution, the holdout method, D-fold cross-validation, the leave-one-out method, and the re-sampling method [Kalakech, 2011].

The re-substitution method is easy and quick to implement. In this method, the same set is used for learning and for testing.

In contrast, in the holdout method, these two sets are not identical; they are built randomly. D-fold cross-validation uses D sets of the same size: (D-1) sets are used for learning and the last set is used for testing. The leave-one-out method is a particular case of the previous method, for which D = N, where N represents the total number of observations.

The re-sampling method uses for learning a set of size N equal to the total number of observations. The learning set may contain the same item several times. For the test, all the observations are used [Kalakech, 2011].
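A sketch of D-fold cross-validation; `train_and_test` stands for any routine that trains a classifier and returns an error rate on the test fold, and is an assumption of this sketch.

```python
import numpy as np

def d_fold_cross_validation(X, y, train_and_test, d=5, seed=0):
    """Split the data into d sets of (almost) equal size, learn on d-1 of
    them, test on the remaining one, and average the d error rates.
    Taking d = N gives the leave-one-out method."""
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, d)
    errors = []
    for i in range(d):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(d) if j != i])
        errors.append(train_and_test(X[train], y[train], X[test], y[test]))
    return float(np.mean(errors))
```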

5 Artificial intelligence tools for industrial diagnosis

Today, conventional techniques are little used in the field of industrial diagnosis. Researchers have turned toward artificial intelligence techniques [Kadri et al., 2012b]. These techniques can be applied to different types of industrial systems (simple, complex, or dynamic). They have a great capacity for learning, even with a limited set of data. Among these methods, we can cite neural networks and fuzzy logic [Mahdaoui and Mouss, 2008].

5.1 Neural networks

The idea of neural networks comes originally from the biological modeling of the neuron [Mahdaoui and Mouss, 2008]. They use formal neurons based on a mathematical representation. They show large capacities for solving classification problems for the diagnosis of complex systems. They have a great capacity for learning and for generalizing their knowledge to unknown observations. In practice, neural networks can be used as a decision rule in a process of automating the diagnostic operation.

Neural networks are structured in several interconnected layers (an input layer, one or several hidden layers, and an output layer). The input layer corresponds to the observations; therefore, the number of neurons in this layer is equal to the number of parameters of the pattern vector. The hidden layers encode the prototypes of the classes. The number of neurons in the output layer corresponds to the number of known classes. A result of this layer represents the degree of membership of an observation to a class.

$y_i(t) = \sum_j w_{ij}\, u_j(t), \qquad u_i(t) = f(y_i(t))$

Knowing that:

$y_i(t)$ : the internal state of neuron i;

$u_i(t)$ : the output state, or output activation, of neuron i;

$w_{ij}$ : the interconnection weights between neurons.
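Under this notation, one layer of formal neurons reduces to a weighted sum followed by an activation; the choice of the activation function f below is an illustrative assumption (the diagnosis network of Figure 1.7 uses RBF units).

```python
import numpy as np

def layer_output(u, W, f=np.tanh):
    """One layer of formal neurons: the internal state y_i is the weighted
    sum of the inputs u_j (y = W u), and the output activation is f(y_i)."""
    y = W @ u        # internal states y_i(t)
    return f(y)      # output activations u_i(t)
```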

For each new observation presented at the input of a neural network, the degrees of membership of this observation are assessed with respect to the different class prototypes. In the operational phase, the network is capable of classifying new observations. Unclassified observations are rejected by ambiguity.

In several industrial diagnosis applications [Mahdaoui and Mouss, 2008], the learning may be long, difficult, and greedy in computation time. In addition, it must be carried out on all the data at once, with the risk that the network does not retain previous experience. It is necessary to possess a variety of data reflecting all the operating and malfunction conditions of the system studied. Neural networks will not necessarily work properly outside the data space used for learning. Once trained, a neural network is not flexible: if new data arrive, a new learning phase must be carried out.

5.2 Fuzzy logic

Fuzzy logic is one of the best-known artificial intelligence techniques. Lotfi Zadeh proposed it in 1965. It is used in several areas of research and application [Mahdaoui and Mouss, 2008]; we can mention automation, robotics, financial management, medicine, diagnosis, and many others.

In classical logic, a variable that refers to the membership or non-membership of an element in a set can only take the value 0 or 1. This logic does not deal with cases where vague, imprecise, or contradictory data are manipulated, or cases where the boundaries between the different classes are poorly defined. By contrast, in fuzzy logic, the variables are imprecise and take real values between 0 and 1.

In a diagnostic system, the knowledge of an expert about the system studied is modeled in the form of a set of rules [Mouss, 2006]. The construction of a fuzzy diagnostic system therefore consists of two phases: data acquisition and rule design. The rules express the relationship between the symptoms and the causes. They make it possible to locate the faulty component. In general, a rule has the following form:

If Condition Then Conclusion

The condition contains the symptoms observed by the expert, and the conclusion represents the faulty component indicated by the expert. The classification of the data by this technique is then carried out by identifying the common properties that characterize the elements of the same class. The operation of a fuzzy system decomposes into fuzzification, inference, and defuzzification [Mahdaoui and Mouss, 2008].

The fuzzification phase determines the degree of membership of each observation to an operating or malfunction state. This phase is carried out using membership functions, which typically comprise three to seven fuzzy states.

The degrees of membership make it possible to apply the fuzzy rules. Fuzzy logic thus offers rigorous formalisms to infer the degrees of membership of the output variables.

The defuzzification phase recovers precise data from the fuzzy data. This phase is carried out either by selecting the largest degree of membership, by computing the weighted average, or by determining the center of gravity of the values obtained.
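These three phases can be sketched for a single hypothetical input; the membership functions, the rule base, and the output values below are illustrative assumptions, not taken from this book:

# Minimal sketch of fuzzification -> inference -> defuzzification.
# Membership functions, rules, and output values are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diagnose(temperature):
    # Fuzzification: degree of membership to each state.
    normal = tri(temperature, 20.0, 50.0, 80.0)
    faulty = tri(temperature, 60.0, 100.0, 140.0)

    # Inference: each rule "If state Then severity" fires with the
    # degree of membership of its condition.
    rules = [(normal, 0.0),   # If temperature is normal Then severity 0
             (faulty, 1.0)]   # If temperature is faulty Then severity 1

    # Defuzzification by weighted average of the conclusions.
    total = sum(mu for mu, _ in rules)
    return sum(mu * out for mu, out in rules) / total if total else 0.0

print(diagnose(75.0))  # degree of failure between 0 and 1

Here defuzzification uses the weighted average; the largest-degree and center-of-gravity variants mentioned above would replace only the last step.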

Fuzzy logic gives a global vision and more complete information on the current state of a complex system by indicating degrees of membership. For example, it helps the expert take decisions to prevent a switch to a failure state when the system operates normally but shows a high degree of membership to a failure state. The major drawback of using fuzzy rules is the generation of a large number of errors; therefore, to set up a fuzzy diagnostic system, the test phase must be repeated several times to reduce the error rate [Meunier et al., 1996].

6 Conclusion:

The objective of this chapter was to define a few concepts that we will use in the remainder of this book. First, we presented the general architecture of a diagnostic system based on pattern recognition. Then, a few techniques used in the field of industrial diagnosis were presented and discussed: the Bayesian method, the Parzen kernel method, the k-nearest-neighbor method, the direct calculation of borders, Fisher's linear discriminant, neural networks, and fuzzy logic. The choice of a pattern recognition method is made after an evaluation of the classifier, and depends on the data and on the nature of the system.

Finally, we can say that AI tools are better adapted to complex and dynamic systems, since in this type of system the information gathered is at once voluminous, imprecise, and uncertain.

In the next chapter, we will complete the study of intelligent techniques by presenting ant colony algorithms, inspired by the behavior of real ants. A colony of ants is able to solve complex problems even though each ant of the colony has limited cognitive capacities. This behavior is called collective intelligence.

We will then explain the use of these algorithms in each diagnostic phase, either as an optimization method or as a classification method.

illustration not visible in this excerpt

Summary:

Ant colony optimization was initially introduced by Dorigo in 1992 in his doctoral thesis. The objective of this chapter is to introduce the three ant colony algorithms that we have used in the framework of this book. In the first part of this chapter, we present a bibliographical study of ant colony algorithms. In the second part, we propose new implementations of a few algorithms that we have used for the development of an industrial diagnosis system.

Chapter 2: Ant Colony Optimization

1 Introduction:

In recent years, several metaheuristics have shown their effectiveness for solving complex problems, a class to which the problem of industrial diagnosis belongs. Social insects such as ants are renowned for their ability to achieve relatively complex collective tasks. We can cite, for example, nest construction, brood sorting, or the self-distribution of tasks in the population [Dorigo, 1992]. These achievements are generally regarded as the result of collective work, since individuals are unable to apprehend the structures in question and yet the community succeeds in completing them. This capacity is called collective intelligence.

2 The actual ants

2.1 Basic Principle

At first, the ants walk randomly from the nest to the food source and vice versa. Each ant deposits along its path a chemical substance called pheromone, which guides it on the return trip.

On this return trip, the nature of the pheromone is different, since it contains information on the quality of the food source. Subsequently, the pheromone encourages other ants to choose this path. The ants have the ability to follow the path that contains the largest amount of pheromone. In other words, when an ant faces several paths, it can find the shortest one thanks to the pheromone density. This behavior allows the colony to find the shortest path between the nest and the food source (Figure 2.1) [Colorni et al., 1991].

2.1.1 The resolution of complex problems

The cooperation between the members of the anthill allows the realization of complex activities, which we will explain with the help of the example of the double binary bridge (Figure 2.2) [Deneubourg et al., 1990]:

illustration not visible in this excerpt

Initially, the bridges are empty and contain no pheromone, so each path has the same probability of being selected. The ants randomly choose a path among the possible paths. The shortest path is quickly marked with more pheromone, since the ants that take it deposit more pheromone over the same period than the ants that choose the other path.
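A minimal simulation of this positive feedback can make the mechanism explicit (the branch lengths, the deposit rule, and the number of ants are illustrative assumptions):

import random

# Minimal sketch of the double binary bridge experiment (illustrative).
LENGTHS = {"short": 1.0, "long": 2.0}    # travel time of each branch
pheromone = {"short": 1.0, "long": 1.0}  # both branches start equal

def choose_branch():
    """Pick a branch with probability proportional to its pheromone."""
    total = sum(pheromone.values())
    r = random.uniform(0.0, total)
    return "short" if r < pheromone["short"] else "long"

for ant in range(1000):
    branch = choose_branch()
    # Shorter branch -> more round trips per unit time -> more pheromone.
    pheromone[branch] += 1.0 / LENGTHS[branch]

print(pheromone)  # the "short" branch typically ends up strongly reinforced

Because the short branch is traversed faster, it accumulates pheromone more quickly, and the probabilistic choice increasingly favors it.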

2.1.2 Stigmergy

Stigmergy is a method of indirect communication in a self-organized emerging environment, in which individuals communicate with each other by modifying their environment. This indirect communication generates a complex behavior of the whole from simple individual behaviors.

3 The artificial ants

Based on the social behavior of ants presented in the previous section, a variety of optimization algorithms, known as ACO ("ant colony optimization") algorithms, have been developed. These algorithms are relatively simple and effective. Each ant has a set of functions that define its behavior, which is very similar to that of a real ant searching for food (Figure 2.2). The artificial ant moves in its environment, which is a combinatorial space. This space is composed of a set of objects obtained by modeling the system studied. The displacement of each ant represents a solution to the problem. The solutions found are evaluated using a function that determines the best solution [Ouadfel and Batouche, 2007].
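The general loop shared by these algorithms can be sketched as follows. This is a schematic skeleton, not one of the algorithms studied in this book; the problem-specific functions (neighbors, heuristic, cost) are assumed to be supplied by the modeling step:

import random

ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 1.0  # usual ACO hyper-parameters

def roulette(weighted):
    """Pick a candidate with probability proportional to its weight."""
    total = sum(w for _, w in weighted)
    r = random.uniform(0.0, total)
    acc = 0.0
    for item, w in weighted:
        acc += w
        if acc >= r:
            return item
    return weighted[-1][0]

def ant_colony(components, neighbors, heuristic, cost,
               n_ants=20, n_iter=100):
    """Generic ACO loop: build solutions, evaporate, deposit pheromone.
    Assumes heuristic(c) >= 0 and cost(solution) > 0."""
    pheromone = {c: 1.0 for c in components}
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        solutions = []
        for _ in range(n_ants):
            solution = [random.choice(components)]
            while True:
                cands = [(c, pheromone[c] ** ALPHA * heuristic(c) ** BETA)
                         for c in neighbors(solution)]
                if not cands:
                    break
                solution.append(roulette(cands))
            solutions.append((solution, cost(solution)))
        for c in pheromone:                   # evaporation
            pheromone[c] *= (1.0 - RHO)
        for solution, sol_cost in solutions:  # deposit along each path
            for c in solution:
                pheromone[c] += Q / sol_cost
            if sol_cost < best_cost:
                best, best_cost = solution, sol_cost
    return best, best_cost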

4 Multi-agent systems

4.1 The principles of databases

The theme of multi-agent systems (MAS) is a very active field of research. Currently, this discipline lies at the junction of several areas, in particular artificial intelligence, distributed computing systems, and software engineering. It studies the collective behaviors produced by the interactions of several autonomous and flexible entities called agents.

4.2 The agents

According to Ferber, an agent is an autonomous entity, real or abstract, capable of acting on itself and on its environment; in a multi-agent universe, it can communicate with other agents, and its behavior is the result of its observations, its knowledge, and its interactions with other agents [Ferber, 1995]. From this definition, a software agent can be defined as a computer entity with the following characteristics [Deneubourg et al., 1990]:

1. Its environment is an open computer system that can contain a set of heterogeneous applications, networks, and systems;
2. It has the means to communicate with other agents;
3. It is driven by a set of specific objectives;
4. It has its own resources;
5. It has only a partial representation of the other agents;
6. It has skills (services) that it can provide to other agents.

From these characteristics, we can say that each virtual ant represents an agent capable of acting and communicating. The virtual ant modifies the pheromone values associated with the different elements, and it uses the pheromone as a medium of indirect communication. Therefore, the result of its behavior is a change in its environment and, consequently, the choices of the other ants are changed at the next iteration.

We can also say that each virtual ant represents an agent endowed with autonomy. The purpose of a virtual ant is to find a solution to the problem treated. It applies a set of rules and uses the pheromone and heuristic values to achieve its objective. Like agents, the virtual ants have only a partial perception of the environment and are not aware of all the elements that influence the colony. We can classify the virtual ants as reactive agents, because they cannot acquire new knowledge and merely act and react according to the information they find in their environment.

To complete this brief presentation of agents, it is necessary to define what is called a multi-agent system (MAS). A MAS is a system that includes a set of agents, located in a certain environment and interacting according to certain relations. According to Ferber [Ferber, 1995], a MAS usually consists of the following elements (a code sketch follows the list):

- A combinatorial space E, which represents the environment.
- A set of objects O, where each object has a position in the environment E. These objects represent resources for the agents, which can manipulate them to achieve their objectives.
- A set of agents A, which represent the active entities of the system.
- A set R of relations that can exist between the agents.
- A set of operators Op allowing the agents to collect, produce, consume, transform, and manipulate the objects of O.
- Operators responsible for representing the application of these operations and the reaction of the world to these attempted modifications, which could be called "the laws of the universe."
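Purely as an illustration, these elements can be mapped onto a hypothetical data structure (Ferber's definition is conceptual, not programmatic):

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List, Set, Tuple

# Hypothetical mapping of the MAS elements onto code (illustrative only).
@dataclass
class MultiAgentSystem:
    environment: Set[Any]                # E: the combinatorial space
    objects: Dict[Any, Any]              # O: object -> position in E
    agents: List[Any]                    # A: the active entities
    relations: Set[Tuple[Any, Any]] = field(default_factory=set)    # R
    operators: Dict[str, Callable] = field(default_factory=dict)    # Op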

4.3 Architecture of agents

Agent architectures address the issues of designing and building computerized systems that satisfy the properties of agents, such as autonomy, reactivity, proactivity, and social capacities.

Wooldridge and Jennings [Wooldridge and Jennings, 1995] indicate that agent architectures can be seen as models for the technology of agent programming. They identify the following classes of agent architectures:

- Cognitive (or deliberative) architectures.
- Reactive (or behavioral) architectures.
- Hybrid architectures.

4.4 The Reactive architectures:

Reactive agents are often described as not being intelligent by themselves. They are very simple components that perceive the environment and are able to act on it. They have no symbolic representation of the environment or of knowledge, no beliefs, and no mechanisms for sending messages. Their capacity is to respond only in stimulus/action mode (Figure 2.3) [Ferber, 1995].
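Such an agent reduces to a table of stimulus/action rules; a minimal sketch with hypothetical stimuli and actions:

# Minimal stimulus/action reactive agent (illustrative; the stimuli and
# actions are hypothetical). No internal state, no beliefs, no messages.

RULES = {
    "pheromone_ahead": "follow_trail",
    "food_found":      "pick_up_and_return",
    "obstacle":        "turn_randomly",
}

def react(stimulus):
    """Map a perceived stimulus directly to an action (default: wander)."""
    return RULES.get(stimulus, "wander")

print(react("pheromone_ahead"))  # -> follow_trail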

[...]
