Excerpt

## TABLE OF CONTENTS

Title Page

Declaration

Certification

Acknowledgement

Dedication

Abstract

Table of Contents

List of Tables

List of Figures

**CHAPTER ONE: INTRODUCTION**

1.0 Preamble

1.1 Optimization

1.2 Classification of Optimization Problems

1.2.1 Classification Based on the Existence of Constraints

1.2.2 Classification Based on the Nature of Equations Involved

1.2.3 Classification Based on the Permissible Values of the Design Variable

1.2.4 Classification Based on the Deterministic Nature of the Variable Involved

1.2.5 Classification Based on the Number of Objective Functions

1.3 Optimal Control Problems

1.3.1 Continuous Optimal Control Problems

1.3.2 Discrete Optimal Control Problems

1.4 Regulator Problems

1.5 Application of Optimization

1.6 General Procedures for Solving Optimization Problems

1.7 Aim and Objectives of the Research

1.8 Motivation

1.9 Scope and Limitation of the Research

**CHAPTER TWO: REVIEW OF RELATED LITERATURE**

2.1 Introduction

2.2 Highlight of Some Numerical Optimization Methods

2.2.1 Introduction

2.2.2 Parametric Optimization: Control Parameterization

2.2.3 Riccati Equations

2.2.4 Shooting Methods

2.2.5 Newton’s Method

2.2.6 Sequential Quadratic Programming

2.2.7 Constraint Handling Techniques

2.2.8 Gradient Methods

2.2.9 The Extended Conjugate Gradient Method Algorithm

2.2.10 The Continuous Case of the Extended Conjugate Gradient Method Algorithm

2.2.11 The Discrete Case of the Extended Conjugate Gradient Method Algorithm

**CHAPTER THREE: METHODOLOGY**

3.0 Introduction

3.1 Derivation of Euler’s Method

3.2 Necessary Condition for an Optimal Control Problem

3.3 Necessary Condition for a General Optimal Control Problem with *n* Equality Constraints

3.4 Necessary Condition for a General Optimal Control Problem with Mixed Constraints

**CHAPTER FOUR: RESULTS AND DISCUSSION**

4.0 Introduction

4.1 Mathematical Computation of Euler’s Method

4.2 Algorithm for Euler’s Method Approach for Solving Optimal Control Problems

4.3 Computational Results

4.4 Discussion of the Results

4.5 Generalization of the Euler-Lagrange Method for Solving the General Form of Continuous-Time Linear Regulator Problems

**CHAPTER FIVE: CONCLUSIONS AND RECOMMENDATIONS**

5.1 Conclusion

5.2 Recommendations

5.3 Contribution to Knowledge

**REFERENCES**

**APPENDICES**

## ACKNOWLEDGEMENTS

I would like to express my appreciation to God for seeing me through in my research work.

I also give thanks to my loving parents, Late Special Apostle and Prophetess (Mrs.) J. A. Olaosebikan, for their encouragement and financial support throughout my academic pursuit; I am grateful to them. I appreciate the entire Olaosebikan family for their love, sacrifices, encouragement and much more.

I want to express my profound gratitude to all the lecturers in the Department of Mathematics, starting with my supervisor, Prof. S. A. Olorunsola (a God-chosen father and mentor to me), the Head of Department, Dr. (Mrs.) R. B. Ogunrinde, Prof. E. A. Ibijola, Prof. F. M. Aderibigbe, Dr. K. J. Adebayo and others, for their support in one way or another and for their fatherly love towards me during my programme.

I am grateful to my loving and caring wife for her encouragement, support, and physical, spiritual and moral assistance during the period of my research. I equally appreciate my daughter, Miss Olamiposi Tabitha, for her love and understanding.

I would like to express my sincere appreciation to all my relations, friends and well-wishers, too numerous to mention, who have in one way or another contributed to the success of this work, most especially my lovely friend, Mr. Ibitoye Azeez, for his care, encouragement, financial support and prayers. To all, I say, I love you. Thanks and God bless.

## DEDICATION

I dedicate this research to God Almighty who is the architect of my life.

## ABSTRACT

In this research, the Euler-Lagrange Method approach for solving optimal control problems, in both one-dimensional and generalized form, was considered. In years past, the calculus of variations has been used to solve functional optimization problems. The calculus of variations has special features that make it well suited to unconstrained functional optimization problems, and these features become advantageous for optimal control problems once the technique is suitably amended and modified. This calls for the Euler-Lagrange Method, which is a modification of the calculus of variations for solving optimal control problems. It is expected that the new algorithm will circumvent the difficulties encountered in constructing the control operators embedded in the Conjugate Gradient Method (CGM) for solving optimal control problems. Its application to some test problems has shown improvement in the results compared with existing results for this class of problems.

The objective function values for problems 3, 4, 6, 7, 8, 9 and 10, namely **1.359141**, **-5.000**, **0.36950416**, **0.51699120**, **0.27576806**, **1.5934159 × 10⁻²** and **-3.880763 × 10⁻²**, compare favorably with the existing results **1.359141**, **-5.000**, **0.4146562**, **0.613969**, **0.2739811**, **1.5935 × 10⁻³** and **-3.9992 × 10⁻²** respectively, while the objective function values for problems 1, 2 and 5 differ slightly from the existing results. These results indicate that the method has some advantages over some existing computational techniques built to handle such problems.

## LIST OF TABLES

(Illustration not included in this excerpt)

## LIST OF FIGURES

(Illustration not included in this excerpt)

## CHAPTER ONE

## INTRODUCTION

### 1.0 PREAMBLE

Over the years, optimization has witnessed steadily increasing interest in the application of modern mathematical theories to both social and engineering problems. This development is clearly related to a wide variety of both practical and theoretical interests. The current trend in computer technologies, the availability of high-speed processors and various programming languages permit researchers in various areas of science to investigate and design numerous algorithms to solve engineering problems. However, constructing a high-precision model or algorithm of a real process requires one to start with its mathematical description and analysis, in order to capture the specific features of the problem considered.

Many technical and information processes in different areas possess identical mathematical structures, which can be described by a common optimization problem model. Such a generalization permits the development of general algorithms to solve a wide class of problems. It is, however, not sufficient to analyze a given problem on a purely theoretical basis; hence we find both theoretical and quantitative methods to be indispensable parts of the *modus operandi.* Russell (1970) opined that in the physical, social and even biological sciences, the role of mathematical models has become increasingly prominent. For instance, as soon as a mathematical description of a process is arrived at, depicting how the process responds to varying factors in its immediate environment, it becomes possible to predict its behaviour, and even to answer the question: "what combination of these factors causes the process to operate in the best possible way?". By "best possible" we mean most desirable from the particular viewpoint one may have in mind. For example, we may wish to find the combination of tax and interest rates which causes the national economy to experience growth and stability. There is no doubt that such a state will be most favorable.

It is our desire in this thesis to study variational techniques for finding the best possible value, action, alternative or decision, i.e. the optimum, in practical situations. The field that focuses on techniques for obtaining the best possible alternatives, and serves as a tool for decision making, is *Optimization.* Thus, we shall begin our discussion by examining what optimization really is.

### 1.1 OPTIMIZATION THEORY

Optimization, simply put, can be defined as the process of making things better. Life is full of optimization problems which all of us solve, many of them each day of our lives. For instance, which of these shortcut routes is closer to the police station? Which grade of groundnut oil is better to buy, having the lowest price while giving the lowest cholesterol? Optimization is fine-tuning the inputs of a process, function or device to obtain the maximum or minimum outputs. The inputs are the variables, the function is known as the objective function or performance index, while the output(s) constitute fitness or cost (Haupt et al., 2004). Optimization can also be defined as the act of determining the best decision under available circumstances (Stephenson, 1971). Optimization is a very broad discipline, and its purpose is to find the best possible solution to a given problem. Graphically, an optimization problem can be visualized as trying to find the lowest (or highest) point in a complex, highly contoured landscape. It can also be seen as the use of specific methods to obtain the most cost-effective and efficient solution to a problem or the design for a process (Edgar and Himmelblau, 2001). In decision making, managers of organizations take many technological and management decisions, and most times the motivation for such decisions is either to minimize the effort required or to maximize the benefit desired. In this light, Rao (1990) defines optimization as the process of finding the conditions that give the maximum or minimum value of a function. Optimization is thus a body of knowledge characterized by a process that uses specific methods and weighs alternative decisions or actions, with the aim of attaining the "best" possible decision or action among the available alternatives, so as to spur an organization into achieving its goals.

As is well known, the principle of optimization was first scribbled centuries ago on the walls of an ancient Roman bathhouse, in connection with a choice between two aspirants for the emperorship of Rome. According to Thomas and David (2001), "De duobus malis, minus est semper eligendum" - meaning, "of two evils, always choose the lesser". In everyday life, decisions are made to accomplish certain tasks. Normally, there exist several possible ways or methods by which a certain task can be accomplished. Some of these methods may be more efficient or reliable than others, and the presence of physical constraints implies that not just any method can be used. It thus becomes necessary to consciously determine the "best" or "optimal" way to accomplish the task.

Mital (1976) defined optimization as the act of obtaining the best policies to satisfy certain objectives while at the same time satisfying some fixed requirements or constraints. It involves the study of optimality criteria for problems, the determination of algorithmic methods of solution, the study of the structure of such methods, and the computer implementation of the methods, both under trial conditions and on real-life problems.

Optimization pervades the fields of science, engineering, medicine and business. In physics, many different optimality principles have been enunciated, describing natural phenomena in the fields of optics and classical mechanics. The field of statistics treats principles such as "maximum likelihood", while business invokes "minimum cost", "maximum use of resources" and "minimum effort" in its efforts to increase profits. A typical engineering problem can be posed as follows: a process can be represented by some equations, or perhaps solely by experimental data, and one has a single performance criterion in mind, such as minimum cost. The goal of optimization is to find the values of the variables in the process that yield the best value of the performance criterion. A trade-off usually exists between capital and operating costs. The described factors - the process or model and the performance criterion - constitute the optimization problem.

Igor et al. (2009) defined optimization as the use of specific methods to determine the most cost-effective and efficient solution to a problem or design for a process. This technique is one of the major quantitative tools in industrial decision making. In an industrial process, for example, the criterion for optimum operation is often minimum cost or maximum efficiency, where the cost can depend on a large number of interrelated controlled parameters, and the performance criterion could be to maximize the number of programs run in a minimum number of hours.

Optimization is the act of obtaining the best result under given circumstances. This usually entails finding the minimum or maximum value of a function, called an extremum. In the design, construction and maintenance of any engineering system, engineers have to take many technological and managerial decisions at several stages. The ultimate goal of all such decisions is either to minimize the effort required or to maximize the benefit desired. Since the effort required or the benefit desired in any practical situation can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function. Optimization can be taken to mean minimization, since the maximum of a function can be found by seeking the minimum of the negative of the same function: the maximum value of a function is the negative of the minimum value of its negative, max(f) = -min(-f).
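As a small illustration of this duality (the function and domain here are made up for demonstration and are not from the thesis), minimizing the negated function recovers the maximum of the original:

```python
# Hypothetical example: f(x) = -(x - 2)**2 + 3 attains its maximum value 3 at x = 2.
def f(x):
    return -(x - 2) ** 2 + 3

# Coarse grid search over an illustrative domain [0, 4].
xs = [i / 100 for i in range(401)]

max_f = max(f(x) for x in xs)       # direct maximization
min_neg_f = min(-f(x) for x in xs)  # minimize the negated function instead

# max(f) equals -min(-f): both searches land on the same extremum.
assert max_f == -min_neg_f
```

The same trick is why most numerical libraries expose only a minimizer: a maximization problem is solved by handing the minimizer the negated objective.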

In mathematics and computer science, optimization, or mathematical programming, refers to choosing the best element from some set of available alternatives. In the simplest case, this means solving problems in which one seeks to minimize or maximize a function by systematically choosing the values of real or integer variables from within an allowed set. This formulation, using a scalar, real-valued objective function, is probably the simplest example; the generalization of optimization theory and techniques to other formulations embraces a large area of applied mathematics. More generally, optimization means finding the "best available" values of some objective function over a defined domain, including a variety of different types of objective functions and different types of domains.

The earliest optimization technique, known as steepest descent, is credited to Augustin-Louis Cauchy. Historically, the first nomenclature to be introduced was linear programming, invented by George Dantzig in the 1940s. The term programming in this context does not refer to computer programming (although computers are nowadays used extensively to solve mathematical problems). Instead, the term comes from the use of the word "program" by the United States military to refer to proposed training and logistics schedules, which were the problems Dantzig was studying at the time. (Additionally, the use of the term "programming" was later apparently important for receiving government funding, as it was associated with high-technology research areas that were considered important.)

However, this body of knowledge does not work in isolation; rather, it teams up with various other areas of mathematics for the sole aim of unification, as acclaimed by Russell (1970), Ibiejugba (1985) and Otunta (1991). One such area is functional analysis, a ready tool for unifying various disciplines, gathering many diverse specialized mathematical tools into one or a few generic principles that are quite essential. Thus, a foreknowledge of this area of mathematics will go a long way towards quickening the understanding of most of the problems that one encounters in the course of studying *optimization.*

### 1.2 CLASSIFICATION OF OPTIMIZATION PROBLEMS

Problems in optimization often emanate from physics, as we can see from the works of great mathematicians like Gauss (Manna, 1976/77), Euler (Boyer and Uta, 1991) and Lagrange (Oliveira, 2002). However, this trend has changed, and optimization now draws its problems from all facets of human endeavour. Optimization problems deal with how to do things in the best possible manner. For obvious reasons, the solution of such problems is highly desirable, and it has received an increasing amount of attention in recent years. In fact, mathematicians have worked on methods for obtaining the optimum for many years, starting most likely with Descartes and Fermat, who worked on such problems in the seventeenth century, even before the development of the calculus by Newton.

In the light of the foregoing, an optimization problem can be classified as a *static optimization problem* if its solution does not change with time. Such a problem is described by a set of algebraic or transcendental equations, called the mathematical model. A nonlinear programming problem with a quadratic objective function is called a *quadratic programming problem.* The quadratic programming problem is usually formulated as

(Formulation not included in this excerpt)

If both the constraints and the objective function are linear, we have a *linear programming problem.* Similarly, if the solution to the optimization problem is a function of time, the problem is called a *dynamic optimization* problem (Wismer and Chattergy, 1978). Such a problem is generally described by a model consisting of a set of differential equations and an objective function, which may be a functional. Based on constraints, we can identify two broad categories of problems: constrained optimization problems and unconstrained optimization problems. Constrained optimization problems are subject to one or more constraints, in both the static and the dynamic cases; the mathematical models represent the constraints, which may be further classified as equality constraints or inequality constraints, while unconstrained optimization problems contain no constraints. Thus, for constrained optimization problems, the objective is to find a set of design parameters that makes a prescribed function of these parameters minimum or maximum subject to certain constraints.
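Since the formulation itself is not reproduced in this excerpt, the standard quadratic programming form referred to above can be sketched (with generic symbols, not taken from the thesis) as

```latex
\min_{x \in \mathbb{R}^n} \; \tfrac{1}{2}\, x^{T} Q x + c^{T} x
\quad \text{subject to} \quad A x \le b, \quad x \ge 0,
```

where $Q$ is a symmetric matrix. When $Q = 0$ the objective becomes linear, and the problem reduces to the linear programming form $\min\, c^{T} x$ subject to $A x \le b$, $x \ge 0$, consistent with the remark that linear constraints and a linear objective give a linear programming problem.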

We can also classify optimization problems as integer and real-valued programming problems. In integer programming, some or all the design variables of an optimization problem are restricted to take only integer (or discrete) values. For example, problems based on housing units in an estate or bottles of lager beer will not admit non-integer values. Real-valued programming problems are designed to minimize or maximize a real function by systematically selecting values of real variables within an allowed set; since this set contains only real values, such a problem is called a real-valued programming problem.

Optimization problems can also be classified as deterministic or stochastic programming problems. Deterministic programming problems are the class of problems where all the design variables are known beforehand. Stochastic programming problems admit cases where some or all of the parameters (design variables and/or preassigned parameters) are probabilistic in nature. Examples are problems cast on measurements or estimates of life span.

Optimization problems may also be classified, based on the separability of the objective and constraint functions, as separable and non-separable programming problems. By separable problems, we mean situations where the objective and the constraints are separable. A function is said to be separable if it can be expressed as the sum of *n* single-variable functions, and separable programming problems can be expressed in the standard form as:

(Formulation not included in this excerpt)
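The standard separable form alluded to above, omitted from this excerpt, can be sketched with generic symbols as

```latex
\min_{x} \; f(x) = \sum_{i=1}^{n} f_i(x_i)
\quad \text{subject to} \quad
g_j(x) = \sum_{i=1}^{n} g_{ij}(x_i) \le b_j, \quad j = 1, \dots, m,
```

so that every term of the objective and of each constraint depends on a single variable $x_i$, which is exactly the sum-of-single-variable-functions property defined in the text.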

With respect to the number of objective functions, problems in optimization may be classified as single-objective and multi-objective programming problems. For instance, a multi-objective programming problem may take the form:

(Formulation not included in this excerpt)
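A generic multi-objective formulation of the kind referred to above (sketched with generic symbols, since the original is not reproduced in this excerpt) is

```latex
\min_{x} \; \bigl( f_1(x),\, f_2(x),\, \dots,\, f_k(x) \bigr)
\quad \text{subject to} \quad g_j(x) \le 0, \quad j = 1, \dots, m,
```

where the $k$ objective functions are generally in conflict, so "optimality" is usually understood in the Pareto sense rather than as a single minimizing point.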

In all these, we deal with problems in finite-dimensional spaces as in Deb, (2000) and Amouzgar, (2012).

In a holistic view, we can say that optimization has evolved into a mature discipline over the past 60 years and can now be used to solve a host of problems arising from a variety of application areas (Ferris, 2011). In addition, it is worthy of note that while the demonstrated value of optimization in solving standard models of increasing size and complexity is of critical importance, the real value of optimization lies not in solving a single problem, but rather in providing insight and advice on the management of complex systems. Such advice needs to be part of an interactive debate with informed decision makers (Ferris *et al.*, 2009).

**[...]**

- Quote paper
- Olaosebikan Temitayo Emmanuel (Author), 2019, Application of the Euler-Lagrange-Method for solving optimal control problems, Munich, GRIN Verlag, https://www.grin.com/document/506832
