Centro de Investigación y de Estudios Avanzados del
Instituto Politécnico Nacional Unidad Guadalajara
Extraction of Dynamic Patterns from
Wide-Area Measurements using
Empirical Orthogonal Functions
A thesis presented by:
Pedro Esquivel Prado
to obtain the degree of:
Doctor of Science
in the subject of:
Electrical Engineering
Thesis Advisor:
Dr. Arturo Román Messina
CINVESTAV del IPN Unidad Guadalajara, Guadalajara, Jalisco, August 2010.
Extraction of Dynamic Patterns from
Wide-Area Measurements using
Empirical Orthogonal Functions
Doctor of Science Thesis
in Electrical Engineering
By:
Pedro Esquivel Prado
Master of Science in Electrical Engineering
CINVESTAV del IPN Unidad Guadalajara 2005-2007
Scholarship granted by CONACYT, No. 172551
Thesis Advisor:
Dr. Arturo Román Messina
CINVESTAV del IPN Unidad Guadalajara, August 2010.
Acknowledgments
My parents, Ignacio Esquivel Alcaraz and Lilia Prado Robles, for giving me all
their love and understanding and for motivating me to study.
My siblings, María Inés, José Alejandro, Ezequiel, Olivia, Mauricio, and María
Reyna Leticia, and Patricia, for giving me all their support.
I would like to express my sincere thanks to my thesis supervisor, Dr. Arturo
Román Messina, for providing me with the support, advice, facilities, and
enthusiasm required to successfully complete this thesis; his help in the search
for relevant literature is very much appreciated.
I wish to thank my professors and friends at CINVESTAV Guadalajara, as well
as my friends from my hometown (El Papalote, Nayarit, México).
Finally, acknowledgements are also due to CONACYT for its support.
Abstract
Multivariate statistical data analysis techniques offer a powerful tool for
analyzing power system response from measured data. In this thesis, a
statistically-based, data-driven framework that integrates the use of complex
wavelet empirical orthogonal function (EOF) analysis and the method of
snapshots is proposed to identify and extract dynamically independent spatio-
temporal patterns from time-synchronized data. The procedure allows
identification of the dominant spatial and temporal patterns in a complex data
set and is particularly well suited for the study of standing and propagating
features that can be associated with electromechanical oscillations in power
systems. It is shown that, in addition to providing spatial and temporal
information, the method improves the ability of conventional correlation
analysis to capture temporal events and gives a quantitative result for both the
amplitude and phase of motions, which are essential in the interpretation and
characterization of transient processes in power systems. The efficiency and
accuracy of the developed procedures for capturing the temporal evolution of
the modal content of data from time-synchronized phasor measurements of a
real event in México are assessed. Results show that the proposed method can
provide accurate estimation of nonstationary effects, modal frequency, time-
varying mode shapes, and the time instants of intermittent or irregular
transient behavior associated with abrupt changes in system topology or
operating conditions.
Contents
Chapter 1
Introduction
1.1 Background and Motivation
1.2 Statement of the Problem
1.3 A Brief Review of Previous Work
1.3.1 Model-Based Approaches
1.3.2 Measurement-Based Techniques
1.3.3 Spatio-Temporal Statistical Models
1.4 Thesis Objectives
1.5 Contributions
1.6 Summary and Outline of the Dissertation
1.7 Publications
1.7.1 Refereed conferences
1.7.2 Refereed journals
1.7.3 Book chapters
1.8 References
Chapter 2
Empirical Orthogonal Function Analysis
2.1 Notation
2.2 Theoretical Development
2.3 Discrete Domain Representation
2.4 The Method of Snapshots
2.5 Energy Relationships
2.6 Interpretation of EOFs Using Singular Value Decomposition
2.6.1 Singular Value Decomposition
2.6.2 Relation with the Eigenvalue Decomposition
2.6.3 Geometric Interpretation Using SVD
2.7 Relation Between POMs and Normal Modes
2.7.1 The Notion of Normal Modes
2.7.2 Physical Interpretation of POMs in Terms of the Eigenvalues
2.7.3 Physical Interpretation of POMs Using Singular Value Decomposition
2.8 References
Chapter 3
Complex Empirical Orthogonal Function
3.1 Complex EOF Analysis
3.2 Theoretical Fundamentals of Empirical Orthogonal Functions
3.3 Complex-Vector EOF Analysis
3.4 Analysis of Propagating Features
3.4.1 Spatial Amplitude Functions, S_R^j(t), S_I^j(t)
3.4.2 Spatial Phase Functions, θ_R^j(t), θ_I^j(t)
3.4.3 Temporal Amplitude Functions, R_R^j(t), R_I^j(t)
3.4.4 Temporal Phase Functions, φ_R^j(t), φ_I^j(t)
3.4.5 Spatio-Temporal Analysis
3.4.6 Motivating Examples: Modeling of Propagating Waves Using the Covariance Matrix
3.4.7 Coherency Identification from Measurement Data
3.4.8 Mode Shape Identification
3.5 Numerical Computation of POMs: The Algorithm
3.5.1 Numerical Example
3.6 Concluding Remarks
3.7 References
Chapter 4
Near-Real-Time Analysis of Measured Data Using Complex EOF Analysis
4.1 Motivation
4.2 Analytical Aspects of Time-Dependent EOFs
4.3 Near-Real-Time EOF Analysis
4.3.1 Time-Dependent Covariance Matrix
4.3.2 Spectral Decomposition of the Time-Dependent Covariance Matrix
4.3.3 Time-Dependent Basis Functions
4.3.4 Feature Extraction
4.3.5 Wave Components
4.4 Characterization of Spatio-Temporal Behavior
4.4.1 Temporal Characterization
4.4.2 Spatial Characterization
4.5 Proper Orthogonal Decomposition of Nonstationary Random Processes
4.5.1 Statistics of Random Processes
4.5.2 Stationarity of Random Processes
4.5.3 Moments of Random Variables
4.5.4 Expectation Value and Standard Deviation
4.5.5 Recursive Algorithm
4.6 Multiscale EOF Analysis
4.6.1 Wavelet-Based EOF Analysis
4.6.2 Empirical Mode Decomposition-Based EOF Analysis
4.7 Conclusions
4.8 References
Chapter 5
Application
5.1 Complex Empirical Orthogonal Function Analysis of Power-System Oscillatory Dynamics
5.1.1 Application to Time-Synchronized Measured Data
5.1.2 Construction of POD Modes via the Method of Snapshots
5.2 Spatio-Temporal Analysis of Measured Data
5.2.1 Temporal Properties
5.2.2 Frequency Determination from Instantaneous Phases
5.2.3 Mode Shape Estimation
5.2.4 Energy Distribution
5.3 Extraction in Near-Real-Time of Propagating Features of Power-System Oscillatory Dynamics
5.4 Discussion
5.5 References
Chapter 6
Conclusions and Recommendations for Future Work
6.1 General Conclusions
6.2 Recommendations for Future Work
Appendix A
EOFs in the Presence of Measurement Error
A.1 References
Appendix B
Optimal Linear Predictor
B.1 References
Appendix C
Correlation Function Using Linear Instantaneous Estimates
C.1 References
List of Figures and Tables
Figures
Chapter 3
Fig. 3.1. Conceptual diagram illustrating the phenomenon of temporal and spatial energy in a field.
Fig. 3.2. Conceptual view of the proposed algorithm.
Fig. 3.3. Space-time evolution of a propagating wave on the proposed field.
Fig. 3.4. Standing wave estimated from the original field.
Fig. 3.5. Frequency spectrum.
Fig. 3.6. Amplitude of the standing wave.
Fig. 3.7. Travelling wave estimated from the original field.
Fig. 3.8. Frequency spectrum.
Fig. 3.9. Wave constant.

Chapter 4
Fig. 4.1. Visualization of PMU data in terms of spatial and temporal formation.
Fig. 4.2. Conceptual diagram of the sliding-window-based approach.
Fig. 4.3. Proposed algorithm for the complex EOF identification.

Chapter 5
Fig. 5.1. Schematic of the MIS system showing the location of the observed oscillations. Measurement locations are indicated by shaded circles.
Fig. 5.2. Time traces of bus frequency swings recorded January 1, 2004 and detail of the oscillation build-up.
Fig. 5.3. Time-varying means and fluctuating components of the recorded bus frequency signals. The time series are smoothed using wavelet shrinkage.
Fig. 5.4. Comparison of the Fourier transform spectrum of the original signal and the spectra constructed with the instantaneous mean removed.
Fig. 5.5. Energy captured as a function of the number of modes. The percentage of energy captured by the jth mode is measured by E_j = 100·λ_j / Σ_{i=1}^n λ_i.
Fig. 5.6. Reconstruction of the original data using the three leading POMs. Solid lines represent the original time series and dotted lines represent the composite oscillation obtained by adding the temporal modes.
Fig. 5.7. Temporal patterns of variability associated with the dominant mode.
Fig. 5.8. Instantaneous frequency of POM 1. Time interval 06:27:42-06:29:39.
Fig. 5.9. Mode shape of POM 1 for various time intervals of interest.
Fig. 5.10. Energy distribution.
Fig. 5.11. Time trace of bus frequency swings recorded on January 1, 2004 and detail of the oscillation build-up.
Fig. 5.12. Reconstruction of the original data with the travelling wave mode identified using the conventional (off-line) and recursive (on-line, W_2) EOF analysis.
Fig. 5.13. (Upper) average energy of the travelling wave mode using the conventional (off-line) EOF analysis, and (lower) instantaneous energy of the travelling wave mode using the recursive (on-line, W_2) EOF analysis.
Fig. 5.14. Instantaneous frequency of the dominant mode.
Fig. 5.15. (Upper) mode shape of the travelling wave mode using the conventional (off-line) EOF analysis, and (lower) time-varying mode shape of the travelling wave mode using the recursive (on-line, W_2) EOF analysis.
Fig. 5.16. Spatial patterns of the leading mode showing temporal variability.

Tables

Chapter 5
Table 5.1. RMS error in EOF approximation.
Chapter 1
Introduction
Many physical processes in power systems involve variability over both space
and time. This variability can be quite complicated, including variations caused
by protective control actions, automatic controls, and load characteristics.
This introductory chapter presents an outline of the research work in this
thesis, and defines concepts linked to power system dynamic behavior.
Furthermore, the motivation for the study of spatial and temporal variability in
power system regimes is presented. The discussion will apply equally well to
numerical or measured data.
The objectives of the work are stated and the main contributions are
summarized. An outline of the structure of the thesis is also given.
1.1 Background and Motivation
Many dynamic processes arising in power system behavior involve phenomena
that vary in time and space in complicated ways [1-4]. Examples of such
processes include inter-area oscillations between generators or interconnected
electrical systems and frequency stability [5-8].
Inter-area oscillations of an electromechanical nature have frequently been
observed in large interconnected power systems, involving areas or groups of
generators swinging against each other. Wide-area monitoring systems offer the
possibility to analyze and characterize complicated oscillations involving the
exchange of kinetic energy between machines or electrical areas in large,
interconnected power systems [3-13]. As the number of measured signals
increases, accurate characterization of relevant modal behavior becomes
difficult.
Complex data sets obtained from synchronized measurements at various
system locations tend to produce a large amount of information. Moreover,
different faults may result in different dynamic patterns, thus requiring the use
of statistically-based correlation techniques to examine dynamic trends and
phase relationships.
Characterization of spatial and temporal changes in the dynamic pattern of
system oscillations is a problem of great theoretical and practical importance
that has been barely studied.
In the study of complicated inter-area oscillations it is important to
analyze not only the temporal behavior of the underlying phenomena, but also
the physical (spatial) distribution of the phenomena across the system. This
problem has some similarities with the analysis of multivariate data such
as that obtained from measurements of wave phenomena, climate variations, or
turbulence in complex flows. Further, propagating features should be identified
in order to assess control requirements and the extent and degree of
nonlinearity.
Multivariate data analysis techniques provide a useful means to cope with the
increasing complexity and information from wide-area monitoring schemes, to
allow statistical data to predict future performance, and to deal with existing
and new uncertainties [1,4,8-13].
Modelling and controlling such phenomena is a complex spatio-temporal
dynamical problem with important practical applications for which few
analytical models have been proposed. The computation time required to solve
large interconnected systems is large and may become prohibitive for practical
systems. To reduce the complexity of the problem, several simplifications are
commonly used that may result in a poor characterization of global system
behavior.
The study of analytical techniques to extract the most significant information
from simultaneous measurements of system oscillations constitutes the main
focus of this work.
1.2 Statement of the Problem
As noted earlier, the phenomena observed in power system oscillatory dynamics
are diverse and complex. Power system ambient measurements are known
to exhibit noisy, nonstationary fluctuations resulting from random changes in
load and other random phenomena. Forced oscillations resulting from major
events, on the other hand, may contain motions with a broad range of temporal
scales and can be highly nonlinear and time-varying.
Low-frequency system oscillations have been experienced in many power
systems, posing a major obstacle to normal operation and severely limiting
transmission capabilities.
System studies on the nature of these oscillations are based on modal
analysis of analytical models of system behavior or on sequential analysis of
time series obtained from simulation of the nonlinear model or from measured
data [1]. The problem of selecting the most significant modes is of considerable
interest and has been studied intensively by several researchers.
Approaches such as Prony analysis, eigensystem realization algorithms, and
autoregressive processes, among others, have been successfully used to characterize
modal behavior based on a single data set. Linear analysis of power system
models can also be used to extract modal information as well as to design
system controllers [6,14-20]. However, such an approach is computationally
intensive, since a large number of studies is needed to evaluate various
contingency scenarios, and it cannot be directly applied to multivariate data.
Due to the size and dimension of realistic power system models, the volume
of information that is required to characterize the inter-area mode phenomenon
from measured data is very high. This includes the determination of dominant
oscillatory modes, the analysis of phase relationships between key system
signals, and the extraction of dynamic trends and nonlinearities.
A second challenge of phasor measurement unit (PMU) applications is how
to utilize the large volumes of data. Ways are needed to conveniently
summarize and visualize the large amount of data generated by dynamic
system recorders, and to extract the information relevant to the phenomenon of
interest.
Further, data gaps and degraded spatial resolution could result in biased
or poor estimates of power system electromechanical modes.
When simultaneously measured responses throughout a system are
available, dominant linear modes should be extracted using correlation
techniques rather than individual analysis of the system response. This provides
a global picture of system behavior and enables statistical characterization of
the observed phenomena [21-22].
Large, stressed interconnected power systems exhibit nonlinear dynamic
performance when subjected to large disturbances. The extraction of
significant dynamics from multiple measurements is a challenging problem.
Factors such as noise, trends, and nonlinearities may affect modal extraction or
preclude the identification of phase relationships among output signals.
Due to their large number of degrees of freedom and sparse geographical
distribution, power systems exhibit highly complex phenomena including
modal interactions, temporarily chaotic vibrations, and intermodulation [6].
An understanding of these phenomena is necessary if the dynamics and
ultimately the propagating nature of these oscillations are to be predicted.
To determine the mechanisms governing these physical variations, it is
essential to characterize the large-scale interactions between the system modes.
Today, wide-area analyses of inter-system oscillations require a systematic
approach that extracts significant information. Multivariate data analysis
techniques offer a powerful tool to analyze sets of data obtained from
simulations or measurements, supplementing the information from
conventional analysis techniques.
An appealing feature of these approaches is their ability to extract more of the
energy fluctuations than conventional linear decomposition techniques. Also,
the modal representations of the observed data are physically interpretable and
provide meaningful insights into the observed dynamics.
We provide here a brief summary of literature describing the application of
statistical techniques to nonlinear oscillatory problems in power systems.
1.3 A Brief Review of Previous Work
1.3.1 Model-Based Approaches
Model-based approaches have been successfully used to extract modal
information as well as to synthesize transfer functions and design system
controllers.
These approaches are based on analytical approximations obtained from the
series expansion of the nonlinear power system model around a given
operating condition. This theory has been employed to determine both the
interacting modes of oscillation and the degree of nonlinearity.
Recently, nonlinear interaction between the fundamental modes of
oscillation has been studied using normal form theory and bilinear analysis
techniques [8,10,23].
Because these techniques are based on analytical models, reliable and
accurate methods for modal analysis of large system models are critically
needed for determining dynamic information. In addition, models needed to
capture nonlinear phenomena require large, complex analytical formulations
that make them unsuitable for the study of practical systems or the
characterization of random behavior.
With the increasing utilization of measurement devices throughout the
system, measurement-based techniques are required that supplement the
information from analytical models.
1.3.2 Measurement-Based Techniques
Measured data provide a useful complement to model-based methods for
detecting temporal and frequency variability directly from time-synchronized
measurements.
Signal processing methods for estimating power system electromechanical
modes can, in turn, be broadly classified into two categories: ringdown
algorithms and mode-meter methods. The second approach addresses the
problem of estimating system behavior from power system ambient noise
measurements.
A large collection of methods has been investigated for extracting modal
information from measured system response. These methods include such
diverse strategies as Ritz vector reduction, rational approximation, and proper
orthogonal decomposition. Despite the considerable body of research on
extracting modal information, significant barriers remain to the extraction
of relevant information, specifically for the analysis of real-time measurements.
There are three significant drawbacks to such approaches. The first is that
existing algorithms are restricted to essentially linear signals. The second is the
large amount of data to be processed for accurate mode extraction.
The third is that event-based oscillations cannot be evaluated in isolation from
other events. It is clear from the above review that statistical analysis tools are
needed with the ability to treat large data sets and enable the extraction of
dominant patterns.
1.3.3 Spatio-Temporal Statistical Models
Statistical techniques have been widely used in many engineering and science
applications, including geophysical research, climate variations, and vibratory
problems [24-31]. Among these techniques, the proper orthogonal
decomposition (POD) method (also called empirical orthogonal function
analysis and the Karhunen-Loève technique) has been shown to be capable of
representing complicated phenomena with a handful of degrees of freedom
[12].
Recently, several methods have been proposed for the spatio-temporal analysis
of measured data. Among the several methods of analysis available, empirical
orthogonal function (EOF) analysis is a particularly useful tool for studying large
quantities of multivariate data [32]. The decomposition generates a physically
motivated basis for the data. More recently, these techniques have gained wide
popularity in applications related to data analysis and reduced-order modelling
of various physical processes or models.
Applications to, for example, unsteady fluid flow, turbulence, aerospace,
optimal control, structural dynamics, microstructural design, the solution of
stochastic partial differential equations, heat transfer, nondestructive
testing, and system identification have been reported [1,14-15,22,24-32].
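For readers unfamiliar with the decomposition underlying these applications, the EOFs of a space-time data matrix can be computed directly from its singular value decomposition. The following is a minimal sketch, not the thesis implementation; the synthetic field and all names are illustrative:

```python
import numpy as np

def eof_decomposition(X):
    """Decompose a space-time data matrix X (n_time x n_sensors)
    into empirical orthogonal functions via the SVD.

    Returns the spatial patterns (EOFs), the temporal coefficients,
    and the fraction of variance ("energy") captured by each mode."""
    # Remove the temporal mean at each sensor so the SVD acts on the
    # fluctuating part of the field (covariance-based EOFs).
    X = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eofs = Vt                      # rows: spatial modes
    pcs = U * s                    # columns: temporal amplitudes
    energy = s**2 / np.sum(s**2)   # variance fraction per mode
    return eofs, pcs, energy

# Synthetic field: one dominant standing oscillation plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)[:, None]   # time samples
x = np.linspace(0.0, 1.0, 20)[None, :]     # sensor positions
field = np.sin(2*np.pi*0.5*t) * np.sin(np.pi*x) \
        + 0.01 * rng.standard_normal((500, 20))

eofs, pcs, energy = eof_decomposition(field)
print(f"energy in leading mode: {energy[0]:.3f}")  # close to 1.0
```

Because the field is essentially a single standing wave, nearly all of the variance is concentrated in the leading mode; this is the low-dimensional representation that the reduced-order modelling applications above exploit.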
In recent years there have been several studies in which empirical
orthogonal function analyses were used to highlight potential physical
mechanisms associated with system variability. One of the unresolved issues is
the estimation of propagating and standing features that can be associated with
observed or measured data, and their application to real-time data.
A limited number of studies on the statistical analysis of mode propagation in
nonlinear power system models exist. In this context, a number of recent studies
have sought to identify such interactions and their associated time scales by
analyzing both observational records and the results of power system
simulations [14-15].
In this thesis, we extend these formulations and use POD-based methods to
extract modal information from measured data.
The conceptual framework developed provides a rigorous basis for the
analysis, detection, and simplification of complex oscillations, and enables the
simultaneous study of synchronized measurements.
The advantage of this technique is that it is easy to implement, is efficient for
short-time records, and is based on the covariance structure of the data. The
approach is general and can be used to study both measured and simulated
data.
Primary applications of statistical methods for studying oscillatory dynamic
systems include modal extraction and the management of disturbance recordings.
1.4 Thesis Objectives
The primary objective of this research is to develop an analytical, multiscale
data analysis framework for identifying the dominant spatial, temporal, and
frequency variability regimes among critical modes from real-time measured
data. Following the above problem statement, the specific objectives of this research are:
1. To develop a systematic, data-driven low-dimensional representation of
complex inter-system oscillations based on proper orthogonal
decomposition for the analysis, characterization, and simplification of
oscillatory processes. An approach to the joint estimation of temporal
frequency and phase characteristics is developed to this end.
2. To extend existing approaches to characterize the spatio-temporal dynamics
of inter-area oscillations using near-real-time information.
3. To analyze the spatio-temporal complexity and nature of system
oscillations. Unlike other spectral analysis techniques, which attempt to fit a
spectrum to a presupposed model function, the proposed technique makes
no assumptions concerning spectral characteristics and can treat both spatial
and temporal information.
Other identified objectives are:
- To evaluate the performance of the method under noisy, nonlinear
conditions using real-time information.
- To assess the ability of EOF-based models for efficient dynamic modelling of
large data sets.
- To address numerical issues associated with the analysis of the large, complex
information resulting from transient oscillations.
1.5 Contributions
The main original contributions of this research are:
1. The development of a mathematical and computational framework for data-
driven quantification of linear and nonlinear phenomena in physical
processes.
2. The derivation of an analytical framework to extract modal information,
specifically the development of efficient algorithms to extract mode shapes
and phase relationships between selected variables or signals. An approach
to space-time prediction that achieves dimension reduction and uses a
statistical model is also presented.
3. The generalization of existing approaches to account for complex
formulations of the proper orthogonal decomposition and its numerical
implementation.
4. The evaluation of alternative formulations based on singular value
decomposition analysis to characterize standing and propagating features in
complex phenomena. An extension of this method handles the temporal
variation commonly present in propagating waves.
5. The derivation of analytical criteria to characterize physical behavior in
terms of physically meaningful modes in mechanical systems.
1.6 Summary and Outline of the Dissertation
This dissertation is divided into six chapters and three appendices. A brief
summary of each chapter is given below.
Chapter 1 contains the background and the overview of this thesis. The
organization of the remainder of this dissertation is as follows.
In Chapter 2, an overview of the basic concepts and approximations leading
to the empirical orthogonal function and singular value decomposition
methods is given, together with an introduction to numerical aspects of proper
orthogonal decomposition.
Chapter 3 proposes a complex POD formulation that combines proper
orthogonal decomposition with Hilbert analysis. The methodology for POD
analysis is developed in detail, and its relation to other techniques is described.
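The combination of proper orthogonal decomposition with Hilbert analysis can be sketched as follows: each mean-removed channel is augmented with its Hilbert transform to form an analytic signal, and the resulting complex field is decomposed by SVD. This is an illustrative sketch of the general complex-EOF idea, not the thesis code; the travelling-wave example and all names are hypothetical:

```python
import numpy as np
from scipy.signal import hilbert

def complex_eof(X):
    """Complex EOF analysis: augment each measured channel with its
    Hilbert transform to form the analytic signal, then diagonalize
    the resulting complex covariance structure via the SVD."""
    Xa = hilbert(X - X.mean(axis=0), axis=0)  # analytic (complex) field
    U, s, Vh = np.linalg.svd(Xa, full_matrices=False)
    spatial = Vh.conj().T            # complex spatial modes
    temporal = U * s                 # complex temporal coefficients
    return spatial, temporal, s

# A travelling wave: the phase varies linearly with sensor position, so
# a single complex mode captures it, whereas a real-valued EOF analysis
# would split it into a standing-wave pair.
t = np.linspace(0.0, 10.0, 400)[:, None]
x = np.linspace(0.0, 1.0, 16)[None, :]
wave = np.cos(2*np.pi*0.4*t - np.pi*x)

spatial, temporal, s = complex_eof(wave)
print(f"mode-1 energy fraction: {s[0]**2 / np.sum(s**2):.3f}")
```

The phase of the leading complex spatial mode varies along the sensor array, which is precisely the propagating-feature information (spatial phase functions) that a real-valued decomposition cannot express in a single mode.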
In Chapter 4, a statistically-based, data-driven framework that integrates the
use of wavelet-based EOF analysis and a sliding-window-based method is
proposed to identify and extract, in near-real-time, dynamically independent
spatio-temporal patterns from time-synchronized measurements. Different
orthogonal systems are investigated, including orthogonal wavelets, Hilbert
analysis, and empirical orthogonal functions. The developed methods combine
the ability of empirical orthogonal function analysis to decorrelate complex
vector fields with that of wavelet analysis to extract deterministic features.
Wavelet-EOF analysis is particularly appropriate for the modelling of transient
data containing contributions from events whose behavior changes over time
and frequency.
Chapter 5 discusses the practical application of the developed
methodologies to the study of a practical power system. The salient features of
the proposed methodology are demonstrated by applying it to measured data
sets.
Finally, some concluding remarks, suggestions for future work, and
significant contributions are presented in Chapter 6.
Certain details of the implementation of the methodology are relegated to
the appendices, which cover the analysis of EOFs in the presence of
measurement error, optimal linear prediction, and the analysis of correlation
functions using linear estimates.
1.7 Publications
The following publications have resulted from this work:
1.7.1 Refereed conferences
1. P. Esquivel and A. R. Messina, "Complex Empirical Orthogonal Function
Analysis of Wide-Area System Dynamics," IEEE Power and Energy Society
General Meeting (PES), Pittsburgh, Pennsylvania, USA, 2008.
2. P. Esquivel, "Wide-Area Wave Motion Analysis Using Complex
Empirical Orthogonal Functions," International Conference on Electrical
Engineering, Computing Science and Automatic Control (ICEEE), Toluca, Estado
de México, México, 2009.
3. P. Esquivel and A. R. Messina, "Time-Dependent EOF Analysis of Power
System Measurements," International Council on Large Electric Systems
(CIGRÉ), Toronto, Ontario, Canada, 2009.
4. F. Lezama Z., A. L. Rios, P. Esquivel, and A. R. Messina, "A Hilbert-Huang
Based Approach for On-Line Extraction of Modal Behavior from PMU
Data," North American Power Symposium (NAPS), Starkville, Mississippi,
USA, 2009.
1.7.2 Refereed journals
1. A. R. Messina, P. Esquivel, and F. Lezama, "Time-Dependent Statistical
Analysis of Wide-Area Time-Synchronized Data," Mathematical Problems
in Engineering (MPE), Special Issue on Short Range Phenomena: Modeling,
Computational Aspects, and Applications, 2010.
2. P. Esquivel and A. R. Messina, "Spatio-Temporal Decorrelation based on
Statistical Analysis from Wide-Area Measurements," to be submitted to
International Journal of Electric Power System Research, 2010.
1.7.3 Book chapters
1. P. Esquivel, E. Barocio, M. A. Andrade, and F. Lezama, "Complex Empirical
Orthogonal Function Analysis of Power-System Oscillatory Dynamics," book
chapter in Analysis of Nonlinear and Nonstationary Oscillations: A
Time-Frequency Perspective, Springer-Verlag, 2009.
1.8 References
[1] A. R. Messina, "Inter-area Oscillations in Power Systems: A Nonlinear and Nonstationary Perspective," Springer-Verlag, 2009.
[2] T. Hiyama, D. Kojima, K. Ohtsu, and K. Furukawa, "Eigenvalue-based wide-area stability monitoring of power systems," Control Engineering Practice, vol. 13, pp. 1515-1523, 2005.
[3] D. Hu and V. Venkatasubramanian, "New wide-area algorithms for detecting angle instability using synchrophasors," Proceedings of the Western Protective Relay Conference, pp. 1-37, 2006.
[4] R. Avila-Rosales and J. Giri, "Wide-area monitoring and control for power system grid security," 15th PSCC, Liège, pp. 1-7, 2005.
[5] D. Y. Wong, G. J. Roger, B. Porretta, and P. Kundur, "Eigenvalue analysis of very large power systems," IEEE Transactions on Power Systems, vol. 3, no. 2, pp. 472-480, May 1988.
[6] P. Kundur, "Power System Stability and Control," New York: McGraw-Hill, Inc., 1994.
[7] V. Vittal, N. Bhatia, and A. A. Fouad, "Analysis of the inter-area mode phenomena in power systems following large disturbances," IEEE Transactions on Power Systems, vol. 6, no. 2, pp. 1515-1521, Nov. 1991.
[8] D. Ruiz-Vega, A. R. Messina, and G. Enriquez-Harper, "Analysis of inter-area oscillations via nonlinear time series analysis techniques," 15th PSCC, pp. 1-7, Aug. 2005.
[9] J. Arroyo, E. Barocio, R. Betancourt, and A. R. Messina, "A bilinear analysis technique for detection and quantification of nonlinear modal interaction in power systems," in Proc. 2006 IEEE Power Engineering Society General Meeting, pp. 1-8.
[10] Pacific Northwest National Laboratory, U.S. Department of Energy, "The role of synchronized wide area measurements for electrical power grid operations," Networked Embedded Control for Cyber Physical Systems (HCSS-NEC4CPS), 2006.
[11] H. Ukai, K. Nakamura, and N. Matsui, "DSP- and GPS-based synchronized measurement system of harmonics in wide-area distribution system," IEEE Transactions on Industrial Electronics, vol. 50, no. 6, pp. 1159-1164, Dec. 2003.
[12] A. R. Messina and V. Vittal, "Extraction of dynamic patterns from wide-area measurements using empirical orthogonal functions," IEEE Transactions on Power Systems, vol. 22, no. 2, pp. 682-692, May 2007.
[13] A. R. Messina, V. Vittal, D. Ruiz-Vega, and G. Enriquez-Harper, "Interpretation and visualization of wide-area PMU measurements using Hilbert analysis," IEEE Transactions on Power Systems, vol. 21, no. 4, pp. 1763-1771, Nov. 2006.
[14] A. Hannachi, I. T. Jolliffe, and D. B. Stephenson, "Empirical orthogonal functions and related techniques in atmospheric science: a review," International Journal of Climatology, vol. 27, pp. 1119-1152, May 2007.
[15] E. Campbell, "A review of methods for statistical climate forecasting," Technical Report 06/134, Aug. 2006.
[16] D. J. Trudnowski, J. M. Johnson, and J. F. Hauer, "Making Prony analysis more accurate using multiple signals," IEEE Transactions on Power Systems, vol. 14, no. 1, pp. 226-231, Feb. 1999.
[17] R. F. Jeffers, "Techniques for wide-area state estimation in power systems," a thesis submitted to the faculty of the Virginia Polytechnic Institute and State University, 2006.
[18] P. F. Bach, "Wide area control and long distance transfer of electric power," IEA Workshop, pp. 1-7, Nov. 2004.
[19] J. F. Hauer and J. G. DeSteese, "A tutorial on detection and characterization of special behavior in large electric power systems," Pacific Northwest National Laboratory, July 2004.
[20] J. Arroyo, R. Betancourt, A. R. Messina, and E. Barocio, "Development of bilinear power system representations for small signal stability analysis," Electric Power Systems Research, vol. 77, no. 10, pp. 1239-1248, Aug. 2007.
[21] P. Esquivel and A. R. Messina, "Complex empirical orthogonal function analysis of wide-area system dynamics," IEEE Power and Energy Society General Meeting, pp. 1-7, July 2008.
[22] P. Holmes, J. L. Lumley, and G. Berkooz, "Turbulence, Coherent Structures, Dynamical Systems and Symmetry," New York: Cambridge University Press, 1996.
[23] J. Arroyo, E. Barocio, R. J. Betancourt, and A. R. Messina, "Quantifying nonlinearity in power systems using normal forms theory and higher-order statistics," in Proc. 2005 IEEE PES General Meeting, pp. 1-8.
[24] J. F. Hauer, N. B. Bhatt, K. Shah, and S. Kolluri, "Performance of WAMS East in providing dynamical information for the North East blackout of August 14, 2003," The Bonneville Power Administration, pp. 1-6, 2005.
[25] T. P. Barnett, "Interaction of the Monsoon and Pacific trade wind system at interannual time scales. Part I: the equatorial zone," Monthly Weather Review, vol. 111, pp. 756-773, 1983.
[26] M. A. Merrifield and R. T. Guza, "Detecting propagating signals with complex empirical orthogonal functions: a cautionary note," Journal of Physical Oceanography, pp. 1628-1633, Oct. 1990.
[27] J. Terradas, R. Oliver, and J. L. Ballester, "Application of statistical techniques to the analysis of solar coronal oscillations," The Astrophysical Journal, vol. 614, pp. 435-447, Oct. 2004.
[28] D. H. Chambers, R. J. Adrian, P. Moin, D. S. Stewart, and H. J. Sung, "Karhunen-Loève expansion of Burgers' model of turbulence," Physics of Fluids, vol. 31, no. 9, pp. 2573-2582, Sep. 1988.
[29] S. Han and B. Feeny, "Application of proper orthogonal decomposition to structural vibration analysis," Mechanical Systems and Signal Processing, vol. 17, no. 5, pp. 989-1001, 2003.
[30] S. Toh, "Statistical model with localized structures describing the spatial-temporal chaos of the Kuramoto-Sivashinsky equation," Journal of the Physical Society of Japan, vol. 56, no. 3, pp. 949-962, Mar. 1987.
[31] D. J. Trudnowski, "Estimating electromechanical mode shape from synchrophasor measurements," IEEE Transactions on Power Systems, vol. 23, no. 3, pp. 1188-1195, Aug. 2008.
[32] R. P. Pacheco and V. Steffen Jr., "On the identification of non-linear mechanical systems using orthogonal functions," International Journal of Non-Linear Mechanics, vol. 39, pp. 1147-1159, 2004.
Chapter 2
Empirical Orthogonal Function
Analysis
Empirical orthogonal function (EOF) analysis is a statistical method of finding
optimal distributions of energy from an ensemble of multidimensional
measurements. The essential idea is to generate an optimal basis for the
representation of an ensemble of data collected from measurements or
numerical simulations of a dynamic system.
Given an ensemble of measured data, the technique yields an orthogonal
basis for the representation of the ensemble, as well as a measure of the relative
contribution of each basis function to the total energy with no a priori
assumption on either spatial or temporal behavior. The method can serve two
purposes, namely order reduction by projecting highdimensional data into a
lower dimensional space and feature extraction by revealing relevant, but
unexpected, structure hidden in the data.
The following sections provide a review of some aspects of the qualitative
theory of empirical orthogonal functions that are needed in the analysis of
low-dimensional models derived from the technique.
We start by introducing the method in the context of statistical correlation
theory.
2.1 Notation
Most of the notation is standard. The Euclidean norm of a vector $\mathbf{x}$ is
denoted by $|\mathbf{x}|$. Other norms are indicated by double bars: $\|\cdot\|$.
Vector quantities are denoted by boldface letters and scalar quantities by
italic letters.

The inner (scalar, or dot) product of two elements $\mathbf{u},\mathbf{v}$ in
$\mathbb{R}^n$ is given by
$$(\mathbf{u},\mathbf{v}) = \sum_{i=1}^{n} u_i v_i.$$

$L^2$ denotes the space of square-integrable real- or complex-valued functions.
Thus, $L^2[0,1]$ denotes the space of functions defined over the unit interval
$0 \le x \le 1$. The inner product on this space is defined by
$$(f,g) = \int_{\Omega} f(x)\,g^*(x)\,dx,$$
where the asterisk (*) denotes the complex conjugate.

Other symbols are defined in the text.
2.2 Theoretical Development
The proper orthogonal decomposition (POD) method is an optimal technique for
finding a basis that spans an ensemble of data collected from experiments or
numerical simulations [1-4]. More precisely, assume that $u(x_j,t_k)$,
$j = 1,\dots,n$, $k = 1,\dots,N$, denotes a sequence of observations on some
domain $x \in \Omega$, where $x$ is a vector of spatial variables and
$t_k \in [0,T]$ is the time at which the observations are made.

Without loss of generality, the time average of the time sequence,
$$u_m(\mathbf{x}) = \bigl\langle u(\mathbf{x},t_k)\bigr\rangle = \frac{1}{N}\sum_{k=1}^{N} u(\mathbf{x},t_k),$$
is assumed to be zero [3]. Generalizations to this approach are discussed below.
The POD procedure determines empirical orthogonal functions (EOFs),
$\varphi_i(x)$, $i = 1,\dots,\infty$, such that the projection onto the first
$p$ EOFs (a low-order representation)
$$\hat{u}(x_j,t_k) = \sum_{i=1}^{p} a_i(t_k)\,\varphi_i(x_j), \qquad j = 1,\dots,n,\;\; k = 1,\dots,N, \tag{2.1}$$
is optimal in the sense that the average least-squares truncation error,
$$\varepsilon_j = \left\langle\,\Bigl\| u(x_j,t_k) - \sum_{i=1}^{p} a_i(t_k)\,\varphi_i(x_j)\Bigr\|^2\right\rangle, \tag{2.2}$$
is minimized for any $p \le N$, where $\langle\cdot\rangle$ denotes the
ensemble average, $\|f\| = (f,f)^{1/2}$, and $\|\cdot\|$ denotes the $L^2$
norm over $\Omega$. The $a_i$'s are the time-dependent coefficients of the
decomposition, to be determined so that the expansion (2.1) minimizes the
truncation error (2.2). These special orthogonal functions, called the proper
orthogonal modes (POMs), form the reduced basis for $u(x_j,t_k)$. The
computation of the EOFs in the presence of measurement error is given in
Appendix A.
Following [1], assume that the field $u(x_j,t_k)$ is decomposed into a mean
value $u_m(x_j,t_k)$ and a fluctuating part $u'(x_j,t_k)$:
$$u(x_j,t_k) = u_m(x_j,t_k) + u'(x_j,t_k). \tag{2.3}$$
More formally, let $L^2$ denote the space of square-integrable functions. It
follows that a normalized basis function $\varphi$ is optimal if the average
projection of $u$ onto $\varphi$ is maximized, i.e. [2],
$$\max_{\varphi \in L^2([0,1])}\;\frac{\bigl\langle\,|(u(x_j,t_k),\varphi)|^2\,\bigr\rangle}{\|\varphi\|^2}, \qquad\text{subject to } \|\varphi\| = 1, \tag{2.4}$$
where
$$\|\varphi\|^2 = (\varphi,\varphi) = \varphi^T\varphi = \sum_{j=1}^{m}\varphi_j^2.$$
The optimization problem can be recast in the form of a functional for the
constrained variational problem [3]
$$J[\varphi] = \bigl\langle\,|(u(x_j,t_k),\varphi)|^2\,\bigr\rangle - \lambda\bigl(\|\varphi\|^2 - 1\bigr), \tag{2.5}$$
where $\lambda$ is a Lagrange multiplier.¹ A necessary condition for the
extrema is that the Gateaux derivative vanishes for all variations
$\varphi + \delta\psi \in L^2([0,1])$, $\delta \in \mathbb{R}$. This can be
expressed as
$$\frac{dJ}{d\delta}\bigl[\varphi + \delta\psi\bigr]\Big|_{\delta=0} = 0, \qquad \forall\,\psi \in L^2(\Omega). \tag{2.6}$$
Consider now the Hilbert space of all pairs $f,g$, where $f$ and $g$ are
functions in $L^2[0,1]$, i.e., square-integrable functions of the space
variable $x$ on the interval $[0,1]$, where $(\cdot,\cdot)$ denotes the
standard inner product on $L^2$, defined by
$$(f,g) = \int_0^1 f(x)\,g^*(x)\,dx, \tag{2.7}$$
and
$$\|\varphi\|^2 = \langle\varphi,\varphi\rangle = \int_{\Omega} |\varphi|^2\,dx, \tag{2.8}$$
where $\Omega$ is the domain of interest over which $u(x)$ and $\varphi$ are
defined, and the asterisk (*) denotes the complex conjugate.
It follows immediately from (2.6) that
$$\frac{dJ}{d\delta}\bigl[\varphi + \delta\psi\bigr]\Big|_{\delta=0} = 2\,\mathrm{Re}\bigl[\langle(u,\psi)(\varphi,u)\rangle - \lambda(\varphi,\psi)\bigr] = 0, \tag{2.9a}$$
where use has been made of the inner product properties.
¹ Given a function to maximize, $f(P)$, subject to the constraint $g(P) = 0$,
the Lagrange function can be defined as $F(P,\lambda) = f(P) - \lambda\,g(P)$.
Noting, from the commutability of the averaging operator and the spatial
integral, that
$$\langle(u,\psi)(\varphi,u)\rangle - \lambda(\varphi,\psi) = \int_0^1\left[\int_0^1 \bigl\langle u(x)\,u^*(x')\bigr\rangle\,\varphi(x')\,dx' - \lambda\varphi(x)\right]\psi^*(x)\,dx = 0, \tag{2.9b}$$
the condition for the extrema reduces to
$$\int_{\Omega} \bigl\langle u(x)\,u^*(x')\bigr\rangle\,\varphi(x')\,dx' = \lambda\,\varphi(x). \tag{2.10}$$

Equation (2.10) has a finite number of orthogonal solutions $\varphi_i(x)$
(the proper orthogonal modes) with corresponding real and positive eigenvalues
$\lambda_i$. They are consequently called empirical orthogonal functions.
Defining the averaged two-point correlation
$$\mathbf{R}\varphi = \int_{\Omega} \bigl\langle u(x,t)\,u^*(x',t)\bigr\rangle\,\varphi(x')\,dx', \tag{2.11}$$
where
$$R(x,x') = \frac{1}{N}\sum_{k=1}^{N} u(x,t_k)\,u(x',t_k), \tag{2.12}$$
the problem of minimizing (2.9) becomes that of finding the largest eigenvalue
of the eigenvalue problem $\mathbf{R}\varphi = \lambda\varphi$, subject to
$\|\varphi\|^2 = 1$.
In practice, the observations that form the data are only available at
discrete spatial grid points, herein called snapshots. In this case, the
kernel $R(x,x')$ can be written in matrix notation as [5,6]
$$\mathbf{R}(x,x') = \begin{bmatrix} R(x_1,x_1) & \cdots & R(x_1,x_n)\\ \vdots & \ddots & \vdots\\ R(x_n,x_1) & \cdots & R(x_n,x_n)\end{bmatrix},$$
where $n$ indicates the number of measurement points, and
$$R(x_i,x_j) = \frac{1}{N}\sum_{k=1}^{N} u(x_i,t_k)\,u(x_j,t_k), \qquad i,j = 1,\dots,n. \tag{2.13}$$
In other words, the optimal basis is given by the eigenfunctions $\varphi_i$
of (2.13), whose kernel is the autocorrelation function
$R(x,x') = \bigl\langle u(x,t_k)\,u(x',t_k)\bigr\rangle$.
2.3 Discrete Domain Representation
As highlighted in the previous section, time series are usually recorded in
discrete form even though the underlying process itself is continuous. For
discretely sampled measured data, the integral time-average can be
approximated by a sum over the set of sampled data points [1]. In this case,
the vectors
$$\mathbf{x}_j = \mathbf{u}(x_j,t_k) = [u(x_j,t_1)\;\; u(x_j,t_2)\;\; \cdots\;\; u(x_j,t_N)]^T, \qquad j = 1,\dots,n, \tag{2.14}$$
represent a set of snapshots obtained from the observed data at the $n$
locations $\{x_1, x_2,\dots,x_n\}$.

The set of data can then be written as the $N\times n$-dimension ensemble
matrix $\mathbf{X}$ [6]:
$$\mathbf{X} = [\mathbf{x}_1\;\cdots\;\mathbf{x}_n] = \begin{bmatrix} u(x_1,t_1) & \cdots & u(x_n,t_1)\\ \vdots & \ddots & \vdots\\ u(x_1,t_N) & \cdots & u(x_n,t_N)\end{bmatrix}, \tag{2.15}$$
where each column corresponds to the measured response at a specific location.
Typically, $N \neq n$, so $\mathbf{X}$ is generally rectangular. Under these
assumptions, the actual integral (2.10) can be written as
$\mathbf{C}\varphi = \lambda\varphi$, where
$$C_{ij} = \frac{1}{N}\sum_{k=1}^{N} u(x_i,t_k)\,u(x_j,t_k).$$

Assuming the PODs to be of the form $\varphi_i = \sum_{l=1}^{N} w_l^i\,\mathbf{x}_l$,
where $w_l^i$ is a coefficient to be determined, the problem of minimizing
(2.2) can be recast as the problem of finding the largest eigenvalue of the
linear equation
$$\mathbf{C}\varphi = \lambda\varphi, \tag{2.16}$$
where $\mathbf{C}$ is the autocorrelation (covariance) matrix, defined as
$$\mathbf{C} = \frac{1}{N}\mathbf{X}^T\mathbf{X} = \frac{1}{N}\begin{bmatrix} \mathbf{x}_1^T\mathbf{x}_1 & \mathbf{x}_1^T\mathbf{x}_2 & \cdots & \mathbf{x}_1^T\mathbf{x}_n\\ \mathbf{x}_2^T\mathbf{x}_1 & \mathbf{x}_2^T\mathbf{x}_2 & \cdots & \mathbf{x}_2^T\mathbf{x}_n\\ \vdots & \vdots & \ddots & \vdots\\ \mathbf{x}_n^T\mathbf{x}_1 & \mathbf{x}_n^T\mathbf{x}_2 & \cdots & \mathbf{x}_n^T\mathbf{x}_n \end{bmatrix}. \tag{2.17}$$

The resulting covariance matrix $\mathbf{C}$ is a real, symmetric
($C_{ij} = C_{ji}$), positive-semidefinite matrix. Consequently, it possesses
a set of orthogonal eigenvectors $\varphi_i$, $i = 1,\dots,n$, i.e.,
$$\varphi_i^T\varphi_j = \delta_{ij} = \begin{cases}1, & i = j,\\ 0, & i \neq j.\end{cases}$$

Using standard linear algebra techniques, the covariance matrix can be
expressed in the form
$$\mathbf{C} = \mathbf{U}\boldsymbol{\Lambda}\mathbf{V}^T, \tag{2.18}$$
where $\mathbf{U}$ and $\mathbf{V}$ are the matrices of right and left
eigenvectors, and $\boldsymbol{\Lambda} = \mathrm{diag}[\lambda_1\;\lambda_2\;\cdots\;\lambda_n]$
contains the ordered eigenvalues of the correlation matrix.

The eigenvalues computed from (2.18) are real and nonnegative, and can be
ordered such that $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge 0$.
The eigenvectors of $\mathbf{C}$ are called the proper orthogonal modes
(POMs), and the associated eigenvalues are called the proper orthogonal
values (POVs).
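The construction of Eqs. (2.16)-(2.18) can be sketched numerically. The
following minimal NumPy example is not part of the original development; the
two-pattern ensemble it builds is hypothetical, chosen only so that the POV
spectrum is easy to interpret:

```python
import numpy as np

def pod_covariance(X):
    """Return POVs (descending) and POMs (columns) of the zero-mean ensemble X."""
    N = X.shape[0]
    C = (X.T @ X) / N              # n x n covariance matrix, Eq. (2.17)
    lam, Phi = np.linalg.eigh(C)   # eigh: C is real and symmetric
    order = np.argsort(lam)[::-1]  # POVs in decreasing order
    return lam[order], Phi[:, order]

# hypothetical ensemble: 500 time samples of two spatial patterns at 8 locations
t = np.linspace(0, 10, 500)
x = np.linspace(0, 1, 8)
X = (np.sin(2 * np.pi * 0.8 * t)[:, None] * np.sin(np.pi * x)[None, :]
     + 0.3 * np.sin(2 * np.pi * 1.6 * t)[:, None] * np.sin(2 * np.pi * x)[None, :])
X = X - X.mean(axis=0)             # enforce the zero time-average assumption

povs, poms = pod_covariance(X)
print(povs[:3])                    # energy concentrated in the first two POVs
```

Since the synthetic field is a superposition of only two spatial patterns,
essentially all of the ensemble energy appears in the first two POVs, and the
POMs form an orthonormal set.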
For practical applications, the number of snapshots
N
can be rather large,
leading to a very large eigenvalue problem. There are at least two methods to
solve the eigenvalue problem (2.16) [6]: the direct method and the method of
snapshots.
The direct method attempts to solve the eigenvalue problem involving the
$N \times n$ matrix directly, using standard numerical techniques. This can be
computationally intensive if the number of observations is larger than the
number of observing locations or grid points.
22
The method of snapshots, on the other hand, is based on the fact that the
data vectors $\mathbf{u}_i$ and the POD modes $\varphi_l$ span the same linear
space. The latter approach is explored here.
The next section describes in more detail the nature of the approximation
employed here to construct the statistical representation.
2.4 The Method of Snapshots
The method of snapshots is based on the fact that the data vectors
$\mathbf{u}_i$ and the POD modes span the same linear space [6]. In this
approach, we choose the eigenfunctions $\varphi$ to be a linear combination of
the snapshots:
$$\varphi_i = \sum_{l=1}^{N} w_l^i\,\mathbf{x}_l, \tag{2.19}$$
where the coefficients $w_l^i$ are to be determined such that $\varphi$
maximizes
$$\max_{\varphi}\;\frac{1}{N}\sum_{j=1}^{n}\bigl|(\mathbf{x}_j,\varphi)\bigr|^2. \tag{2.20}$$

These functions are assembled into an $N \times n$ matrix $\boldsymbol{\Phi}$,
known as the modal matrix. Equation (2.19) in matrix form becomes
$$\boldsymbol{\Phi} = \mathbf{X}\mathbf{W}, \tag{2.21}$$
where
$$\boldsymbol{\Phi} = [\varphi_1\;\varphi_2\;\cdots\;\varphi_n], \qquad \mathbf{X} = [\mathbf{x}_1\;\mathbf{x}_2\;\cdots\;\mathbf{x}_n], \qquad \mathbf{W} = [\mathbf{w}^1\;\mathbf{w}^2\;\cdots\;\mathbf{w}^n],$$
in which
$$\mathbf{w}^1 = \begin{bmatrix} w_1^1\\ w_2^1\\ \vdots\\ w_N^1 \end{bmatrix}, \quad \mathbf{w}^2 = \begin{bmatrix} w_1^2\\ w_2^2\\ \vdots\\ w_N^2 \end{bmatrix}, \quad\dots,\quad \mathbf{w}^n = \begin{bmatrix} w_1^n\\ w_2^n\\ \vdots\\ w_N^n \end{bmatrix}.$$

Substituting the expression (2.19) into the eigenvalue problem (2.16) gives
$$\mathbf{C}\sum_{l=1}^{N} w_l^i\,\mathbf{x}_l = \lambda\sum_{l=1}^{N} w_l^i\,\mathbf{x}_l, \tag{2.22}$$
where $C_{ij} = (1/N)(\mathbf{u}_i,\mathbf{u}_j)$.

This can be written as the eigenvalue problem of dimension $n$:
$$\mathbf{C}\mathbf{W} = \mathbf{W}\boldsymbol{\Lambda}, \tag{2.23}$$
where $\mathbf{W} = [\mathbf{w}^1\;\mathbf{w}^2\;\cdots\;\mathbf{w}^n]$ and
$\boldsymbol{\Lambda}$ is a diagonal matrix storing the eigenvalues
$\lambda_i$ of the covariance matrix $\mathbf{C}$. In words, the first-order
necessary optimality condition for $\varphi$ to provide a maximum in (2.20) is
given by (2.16). This completes the construction of the orthogonal set
$\{\varphi_1\;\varphi_2\;\cdots\;\varphi_n\}$.
Once the modes are found using these equations, the flow field can be
reconstructed using a linear combination of the modes:
$$\hat{u}(x,t_k) = \sum_{i=1}^{\infty} a_i(t_k)\,\varphi_i(x), \tag{2.24}$$
for some $a_i(t)$, where the $a_i(t)$ are the time-varying amplitudes of the
POD modes $\varphi_i(x)$.

The truncated POD of $u$ is
$$\hat{u}(x_j,t_k) = \sum_{i=1}^{p} a_i(t_k)\,\varphi_i(x), \tag{2.25}$$
where $p$ is the number of dominant modes and $\varepsilon$ is the associated
error function. Having computed the relevant eigenmodes, the temporal behavior
of each mode can be evaluated as the inner product of the eigenmode (the POD
mode $\varphi_i$) and the original data $\mathbf{x}$. To ensure uniqueness of
the solution, the normalization condition $(\varphi_i,\varphi_i) = 1$ is
imposed. The temporal coefficients are then expressed as
$$a_i = \frac{(\mathbf{x},\varphi_i)}{(\varphi_i,\varphi_i)}. \tag{2.26}$$

Note that the temporal modes are uncorrelated in time, i.e.,
$(a_i(t),a_j(t)) = \lambda_j\,\delta_{ij}$, where $\delta_{ij} = 1$ for
$i = j$ and $\delta_{ij} = 0$ otherwise, and that the system (2.25) is optimal
in the sense that it minimizes the error function
$$\varepsilon = \left\langle\,\Bigl\| u(x_j,t_k) - \sum_{i=1}^{p} a_i(t_k)\,\varphi_i(x)\Bigr\|^2\right\rangle. \tag{2.27}$$

It should also be stressed that no conditions are imposed on the data set: the
data can be a sample of a stationary process or a sample of a nonstationary
process.

Equation (2.24) is called the Karhunen-Loève decomposition, and the set
$\{\varphi_j\}$ is called the empirical basis [5].
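As a small numerical illustration of Eqs. (2.24)-(2.26) (a sketch, not part of
the thesis; the rank-one test field below is hypothetical), the temporal
coefficients are obtained by projecting the ensemble onto the POMs, and the
field is rebuilt from the leading $p$ modes:

```python
import numpy as np

def pod_reconstruct(X, Phi, p):
    """Temporal coefficients a_i(t_k) and rank-p reconstruction of X."""
    A = X @ Phi[:, :p]          # a_i = (x, phi_i), Eq. (2.26) with (phi_i, phi_i) = 1
    Xhat = A @ Phi[:, :p].T     # truncated expansion, Eq. (2.25)
    return A, Xhat

# hypothetical rank-one field: one spatial shape modulated in time
t = np.linspace(0, 10, 400)
x = np.linspace(0, 1, 10)
X = np.sin(t)[:, None] * np.cos(np.pi * x)[None, :]
X = X - X.mean(axis=0)

C = X.T @ X / X.shape[0]                 # covariance matrix, Eq. (2.17)
lam, Phi = np.linalg.eigh(C)
lam, Phi = lam[::-1], Phi[:, ::-1]       # descending POV order

A, Xhat = pod_reconstruct(X, Phi, p=1)
print(np.max(np.abs(X - Xhat)))          # a single mode captures the field
```

Because the test field has rank one, a single mode reconstructs it to machine
precision, and the energy of the first temporal coefficient,
$\mathbf{a}_1^T\mathbf{a}_1/N$, equals the first POV.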
2.5 Energy Relationships
The use of the POD method leads naturally to a discussion of truncation
criteria. Several techniques to derive truncated expansions have been proposed
in the literature. Here, we choose to reduce the residual terms $\mathbf{R}_p$
such that the mean square value
$$\bigl\langle\|\mathbf{R}_p\|\bigr\rangle = \sum_{i=p+1}^{N}\lambda_i \tag{2.28}$$
be as small as possible.
Defining the total energy $E$ by $E = \mathrm{Trace}[\mathbf{R}(x_i,x_m)]$,
one obtains
$$E = \sum_{i=1}^{n}\lambda_i. \tag{2.29}$$
The associated percentage of total energy contributed by each mode can then be
expressed as
$$E_i = \frac{\lambda_i}{\sum_{i=1}^{n}\lambda_i}. \tag{2.30}$$
Typically, the order $p$ of the reduced basis $\boldsymbol{\Phi}$ is chosen
such that a predetermined level of the total energy $E$ of the ensemble of
snapshots is captured, e.g., 99%. The $p$ dominant eigenfunctions are then
obtained as
$$\%E(p) = \frac{\sum_{i=1}^{p}\lambda_i}{\sum_{j=1}^{n}\lambda_j}\times 100 \ge 99\%, \qquad p = \arg\min\{E(p) : E(p) \ge E_0\}, \tag{2.31}$$
where $E_0$ is an appropriate energy level, and $0 \le \%E(p) \le 100$ is the
percentage of energy that is captured by the POD basis. By neglecting modes
corresponding to the small eigenvalues, a reduced-order model can be
constructed. In Appendix B, an optimal linear predictor to determine the
lowest integer $p$ is proposed.

The key advantage of this technique is that it allows extracting modal
information from short and often noisy time series, without prior knowledge of
the dynamics affecting the time series.
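The energy criterion of Eq. (2.31) amounts to choosing the smallest $p$ whose
cumulative POV fraction reaches the target level. A minimal sketch (the POV
spectrum below is hypothetical):

```python
import numpy as np

def truncation_order(povs, level=0.99):
    """Smallest p with sum(povs[:p]) / sum(povs) >= level, per Eq. (2.31)."""
    frac = np.cumsum(povs) / np.sum(povs)          # cumulative energy fraction
    return int(np.searchsorted(frac, level) + 1)   # first index reaching the level

povs = np.array([12.0, 4.0, 0.1, 0.05])  # hypothetical POV spectrum (descending)
print(truncation_order(povs))            # -> 2
```

Here the first two POVs already carry more than 99% of the total energy, so
the reduced basis keeps only two modes.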
2.6 Interpretation of EOFs Using Singular Value
Decomposition
A useful alternative method for estimating modal characteristics can be
developed based on the analysis of the response matrix X in (2.15).
Before outlining the procedure for singular value decomposition (SVD)
analysis, we introduce some background information on singular value
analysis.
2.6.1 Singular Value Decomposition
Let $\mathbf{A}$ be a real $m \times n$ matrix. The singular value
decomposition theorem states that $\mathbf{A}$ can be decomposed into the
following form [7]:
$$\mathbf{A} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, \tag{2.32}$$
where $\mathbf{U} = \mathrm{col}[\mathbf{u}_1\;\mathbf{u}_2\;\cdots\;\mathbf{u}_m]$
is an $m \times m$ orthonormal matrix ($\mathbf{U}^T\mathbf{U} = \mathbf{I}$),
$\boldsymbol{\Sigma}$ is an $m \times n$ pseudo-diagonal, positive-semidefinite
matrix whose diagonal entries contain the singular values, and
$\mathbf{V} = \mathrm{col}[\mathbf{v}_1\;\mathbf{v}_2\;\cdots\;\mathbf{v}_n]$
is an $n \times n$ orthonormal matrix ($\mathbf{V}^T\mathbf{V} = \mathbf{I}$).
The columns of $\mathbf{U}$ and $\mathbf{V}$ are called the left and right
singular vectors for $\mathbf{A}$.

Matrix $\boldsymbol{\Sigma}$ has the form
$$\boldsymbol{\Sigma} = \begin{bmatrix} \sigma_1 & & & 0 & \cdots & 0\\ & \ddots & & \vdots & & \vdots\\ & & \sigma_m & 0 & \cdots & 0 \end{bmatrix}\;\text{for } m < n, \qquad \boldsymbol{\Sigma} = \begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_n\\ 0 & \cdots & 0\\ \vdots & & \vdots\\ 0 & \cdots & 0 \end{bmatrix}\;\text{for } m > n.$$

Throughout this research, we will consider only the case $m > n$. The diagonal
entries of $\boldsymbol{\Sigma}$, i.e., the $\sigma_i = \Sigma_{ii}$, can be
arranged to be nonnegative and in order of decreasing magnitude,
$\sigma_1 \ge \sigma_2 \ge \cdots \ge 0$.
Equivalently, we can express the model as
$$\mathbf{A} = [\mathbf{u}_1\;\cdots\;\mathbf{u}_k \mid \mathbf{u}_{k+1}\;\cdots\;\mathbf{u}_m]\begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \mathbf{0}\\ & & \sigma_k & \\ & \mathbf{0} & & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{v}_1^T\\ \vdots\\ \mathbf{v}_k^T\\ \vdots\\ \mathbf{v}_n^T \end{bmatrix}, \tag{2.33}$$
or
$$\mathbf{A} = [\mathbf{U}_A\;\;\mathbf{U}_0]\begin{bmatrix} \mathbf{D} & \mathbf{0}\\ \mathbf{0} & \mathbf{0} \end{bmatrix}\begin{bmatrix} \mathbf{V}_A^T\\ \mathbf{V}_0^T \end{bmatrix},$$
where $\mathbf{D}$ is the diagonal matrix of nonzero singular values, and
$\mathbf{U}_A$ and $\mathbf{V}_A$ are the matrices of left and right singular
vectors, respectively, corresponding to the nonzero singular values.

It is clear that only the first $r$ of the $\mathbf{u}$'s and $\mathbf{v}$'s
make any contribution to $\mathbf{A}$, which can thus be expressed as an outer
product expansion
$$\mathbf{A} = \sum_{i=1}^{r}\sigma_i\,(\mathbf{u}_i\mathbf{v}_i^T), \tag{2.34}$$
where the vectors $\mathbf{u}_i$ and $\mathbf{v}_i$ are the columns of the
orthogonal matrices $\mathbf{U}$ and $\mathbf{V}$, respectively. Techniques to
compute the POMs based on singular value decomposition are next discussed.
2.6.2 Relation with the Eigenvalue Decomposition
An interesting interpretation of the POD modes can be obtained from the
singular value analysis of the response matrix $\mathbf{X}$.

Using the notation in Section 2.2, let the response matrix be given by
$$\mathbf{X} = \begin{bmatrix} u(x_1,t_1) & \cdots & u(x_n,t_1)\\ \vdots & \ddots & \vdots\\ u(x_1,t_N) & \cdots & u(x_n,t_N) \end{bmatrix}, \tag{2.35}$$
where the columns correspond to the responses at the measurement locations.
The SVD of the response matrix $\mathbf{X}$ may be written in compact form as
$$\mathbf{X} = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T, \tag{2.36}$$
where $\mathbf{U}$ is an orthonormal $N \times N$ matrix whose columns are the
left singular vectors of $\mathbf{X}$, $\boldsymbol{\Sigma}$ is an $N \times n$
matrix containing the singular values of $\mathbf{X}$ along the main diagonal
and zeros elsewhere, and $\mathbf{V}$ is an $n \times n$ orthogonal matrix
whose columns correspond to the right singular vectors of $\mathbf{X}$. The
response matrix $\mathbf{X}$ is, in general, complex, and possesses a set of
orthogonal singular vectors with positive singular values.

In terms of the notation above for the SVD, it can be seen directly from
(2.18) that the correlation matrix defined previously satisfies
$$\mathbf{X}\mathbf{X}^T = \mathbf{U}\boldsymbol{\Sigma}\mathbf{V}^T\mathbf{V}\boldsymbol{\Sigma}^T\mathbf{U}^T = \mathbf{U}\boldsymbol{\Sigma}\boldsymbol{\Sigma}^T\mathbf{U}^T \tag{2.37}$$
and
$$\mathbf{X}^T\mathbf{X} = \mathbf{V}\boldsymbol{\Sigma}^T\boldsymbol{\Sigma}\mathbf{V}^T. \tag{2.38}$$
Hence, (2.36) becomes
$$\mathbf{X}\mathbf{X}^T = \mathbf{U}\begin{bmatrix} \sigma_1^2 & & & \\ & \ddots & & \\ & & \sigma_n^2 & \\ & & & \mathbf{0} \end{bmatrix}\mathbf{U}^T. \tag{2.39}$$

It follows immediately from (2.37)-(2.39) that the singular values of
$\mathbf{X}$ are the square roots of the eigenvalues of
$\mathbf{X}\mathbf{X}^T$ or $\mathbf{X}^T\mathbf{X}$ [2,8]. In addition, the
left and right singular vectors of $\mathbf{X}$ are the eigenvectors of
$\mathbf{X}\mathbf{X}^T$ and $\mathbf{X}^T\mathbf{X}$, respectively. Also of
interest, the trace of (2.39) is given by
$$\mathrm{Trace}\bigl(\mathbf{X}\mathbf{X}^T\bigr) = \sum_{i=1}^{r}\sigma_i^2.$$

The POMs, defined as the eigenvectors of the sample correlation matrix
$\mathbf{C} = \mathbf{X}^T\mathbf{X}/N$, are thus equal to the right singular
vectors of $\mathbf{X}$. The POVs, defined as the eigenvalues of matrix
$\mathbf{C}$, are the squares of the singular values divided by the number of
samples $N$.
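These relations are easy to verify numerically. The following sketch (a random
test matrix standing in for measured data; not part of the original text)
checks that the singular values of $\mathbf{X}$ are the square roots of the
eigenvalues of $\mathbf{X}^T\mathbf{X}$, and that the POVs equal
$\sigma_i^2/N$:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 200, 5
X = rng.standard_normal((N, n))                   # stand-in response matrix

s = np.linalg.svd(X, compute_uv=False)            # singular values of X
lam = np.sort(np.linalg.eigvalsh(X.T @ X))[::-1]  # eigenvalues of X^T X
print(np.allclose(s**2, lam))                     # sigma_i^2 = lambda_i -> True

povs = np.sort(np.linalg.eigvalsh(X.T @ X / N))[::-1]  # eigenvalues of C
print(np.allclose(povs, s**2 / N))                # POVs = sigma_i^2 / N -> True
```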
2.6.3 Geometric Interpretation Using SVD
The SVD of matrix $\mathbf{X}$, seen as a collection of column vectors,
provides important insight into the oriented energy distribution of this set
of vectors. The energy of the vector sequence $\mathbf{x}_k$ building an
$N \times m$ matrix is defined by the sum of squares of all its elements (the
squared Frobenius norm):
$$E(\mathbf{X}) = \|\mathbf{X}\|_F^2 = \sum_{j=1}^{N}\sum_{k=1}^{m} x_{jk}^2 = \sum_{i=1}^{r}\sigma_i^2, \qquad r = \min(N,m). \tag{2.40}$$

Thus, the first columns of the matrix $\mathbf{V}$ give an optimal orthonormal
basis for approximating the data [5], so that the energy of a vector sequence
is equal to the energy in its singular spectrum $\sigma_1^2,\dots,\sigma_r^2$
or, alternatively, in its eigenspectrum $\lambda_1,\dots,\lambda_r$. A key
property of the SVD is that the extrema of this oriented energy distribution
occur at the left singular directions.
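Equation (2.40) can be checked directly. The sketch below (a random test
matrix, not measured data) confirms that the squared Frobenius norm equals the
energy in the singular spectrum:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 6))                   # stand-in ensemble matrix

frob_energy = np.sum(X**2)                         # sum of squares of all elements
sigma = np.linalg.svd(X, compute_uv=False)
print(np.isclose(frob_energy, np.sum(sigma**2)))   # True
```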
2.7 Relation Between POMs and Normal Modes
Significant insight into the physical interpretation of POMs can be obtained
from the study of mechanical systems.
In this section we review the theory behind the notion of normal modes in
mechanical systems and show that, under certain conditions, the POMs from
correlation analysis converge to the normal modes.
2.7.1 The Notion of Normal Modes
Consider an $m$-degree-of-freedom nonlinear mechanical system. The equation of
motion describing the forced vibration of the system may be written as
follows:
$$\mathbf{M}\ddot{\mathbf{y}} + \mathbf{K}\mathbf{y} = \mathbf{f}, \tag{2.41}$$
where $\mathbf{M}$ is the mass matrix, $\mathbf{K}$ is the stiffness matrix,
and $\mathbf{y}$ is the displacement vector. Using the transformation
$\mathbf{y} = \mathbf{M}^{-1/2}\mathbf{x}$, Eq. (2.41) takes the form
$$\ddot{\mathbf{x}} + \hat{\mathbf{K}}\mathbf{x} = \hat{\mathbf{f}}, \tag{2.42}$$
where $\hat{\mathbf{K}} = \mathbf{M}^{-1/2}\mathbf{K}\mathbf{M}^{-1/2}$ and
$\hat{\mathbf{f}} = \mathbf{M}^{-1/2}\mathbf{f}$. We also assume that
$\mathbf{M}^{-1}$ exists.

The general solution that represents the natural response of the dynamical
system (2.41) is [9]
$$\mathbf{x}(t) = \mathbf{x}_f(t) + (\hat{\mathbf{K}} - \omega_0^2\mathbf{I})^{-1}\hat{\mathbf{f}}_0\,e^{i\omega_0 t}, \tag{2.43}$$
where $\mathbf{x}_f(t)$ is a solution of the free motion
($\ddot{\mathbf{x}} + \hat{\mathbf{K}}\mathbf{x} = \mathbf{0}$) and can be
expressed in terms of the normal modes $\mathbf{v}_n$ of the free motion as
$$\mathbf{x}_f(t) = \sum_{n=1}^{m}\bigl(\hat{A}_n\cos(\omega_n t) + \hat{C}_n\sin(\omega_n t)\bigr)\mathbf{v}_n = \sum_{n=1}^{m} A_n\sin(\omega_n t + \alpha_n)\,\mathbf{v}_n, \tag{2.44}$$
or,
$$\mathbf{x}_f(t) = \sum_{n=1}^{m} a_n(t)\,\mathbf{v}_n, \tag{2.45}$$
where
$$a_n(t) = A_n\sin(\omega_n t + \alpha_n) \tag{2.46}$$
are the temporal amplitudes, and $\mathbf{v}_n$ is the right eigenvector
associated with the $n$-th system mode; the $A_n$ are constants depending on
the initial conditions. The modal shapes are determined by the eigenvectors
$\mathbf{v}_n$.

The function $\sin(\omega_n t + \alpha_n)$ operates simply as a mapping scale.
The resulting vibration modes of the oscillations are stationary, and one thus
refers to such vibrations as stationary oscillations. Such a natural vibration
mode repeats itself with a period $T_n = 2\pi/\omega_n$, where the $\omega_n$
are the natural frequencies (in rad/s).

Since the determinant of the $m \times m$ matrix
$(\hat{\mathbf{K}} - \omega^2\mathbf{I})$ is a polynomial of degree $2m$ in
$\omega$, it has $2m$ roots, and therefore there are $2m$ normal modes.

Equation (2.45) can be equivalently expressed as
$$\mathbf{x}_f(t) = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1m}\\ v_{21} & v_{22} & \cdots & v_{2m}\\ \vdots & \vdots & \ddots & \vdots\\ v_{m1} & v_{m2} & \cdots & v_{mm} \end{bmatrix}\begin{bmatrix} a_1(t)\\ a_2(t)\\ \vdots\\ a_m(t) \end{bmatrix} = \mathbf{v}\,\mathbf{a}(t), \tag{2.47}$$
where $\mathbf{v} = [\mathbf{v}_1\;\mathbf{v}_2\;\cdots\;\mathbf{v}_m]$ is the
modal matrix of the $m$ normal modes $\mathbf{v}_n$, and
$\mathbf{a}(t) = [a_1(t)\;a_2(t)\;\cdots\;a_m(t)]^T$ is the vector of modal
coordinates, with elements $a_n(t) = A_n\sin(\omega_n t + \alpha_n)$
representing the time modulation of the natural modes [10].
2.7.2 Physical Interpretation of POMs in Terms of the Eigenvalues
The POMs have a physical interpretation of interest in terms of the
eigenvalues of the correlation matrix. Let the response matrix $\mathbf{X}$ of
the mechanical system be expressed in the form
$$\mathbf{X} = \begin{bmatrix} \sum_{n=1}^{m} a_n(t_1)\,v_{n1} & \cdots & \sum_{n=1}^{m} a_n(t_1)\,v_{nm}\\ \vdots & \ddots & \vdots\\ \sum_{n=1}^{m} a_n(t_N)\,v_{n1} & \cdots & \sum_{n=1}^{m} a_n(t_N)\,v_{nm} \end{bmatrix}. \tag{2.48}$$

Now defining the vectors $\mathbf{a}_n$ as $N \times 1$ arrays of the
functions $a_n(t)$ evaluated $N$ times, one can form the ensemble matrix
$$\mathbf{A} = [\mathbf{a}_1\;\mathbf{a}_2\;\cdots\;\mathbf{a}_m] = \begin{bmatrix} A_1\sin(\omega_1 t_1 + \alpha_1) & \cdots & A_m\sin(\omega_m t_1 + \alpha_m)\\ \vdots & \ddots & \vdots\\ A_1\sin(\omega_1 t_N + \alpha_1) & \cdots & A_m\sin(\omega_m t_N + \alpha_m) \end{bmatrix}, \tag{2.49}$$
from which it follows that
$$\mathbf{X} = \mathbf{A}\mathbf{v}^T = \mathbf{a}_1\mathbf{v}_1^T + \mathbf{a}_2\mathbf{v}_2^T + \cdots + \mathbf{a}_m\mathbf{v}_m^T, \tag{2.50}$$
where the $\mathbf{v}_n$ are the column vectors formed by the natural modes of
the system. These are orthogonal to each other in the metrics of the mass and
stiffness matrices.

It thus follows that the correlation matrix $\mathbf{C}$ may be written as
$$\mathbf{C} = \frac{1}{N}\mathbf{X}^T\mathbf{X} = \frac{1}{N}\bigl(\mathbf{v}_1\mathbf{a}_1^T + \cdots + \mathbf{v}_m\mathbf{a}_m^T\bigr)\bigl(\mathbf{a}_1\mathbf{v}_1^T + \cdots + \mathbf{a}_m\mathbf{v}_m^T\bigr). \tag{2.51}$$

To verify that a modal vector is a POM, we post-multiply the matrix
$\mathbf{C}$ by $\mathbf{v}_k$ to obtain
$$\mathbf{C}\mathbf{v}_k = \frac{1}{N}\bigl(\mathbf{v}_1\mathbf{a}_1^T + \cdots + \mathbf{v}_m\mathbf{a}_m^T\bigr)\bigl(\mathbf{a}_1\mathbf{v}_1^T + \cdots + \mathbf{a}_m\mathbf{v}_m^T\bigr)\mathbf{v}_k. \tag{2.52}$$

Using the orthogonality condition $\mathbf{v}_n^T\mathbf{v}_n = 1$, with
$\mathbf{v}_n^T\mathbf{v}_k = 0$ for $n \neq k$, the above expression
simplifies to
$$\mathbf{C}\mathbf{v}_k = \frac{1}{N}\bigl[\mathbf{v}_1\mathbf{a}_1^T\mathbf{a}_k + \cdots + \mathbf{v}_k\mathbf{a}_k^T\mathbf{a}_k + \cdots + \mathbf{v}_m\mathbf{a}_m^T\mathbf{a}_k\bigr]. \tag{2.53}$$

Further, noting that
$$\mathbf{a}_n^T\mathbf{a}_k = A_nA_k\sum_{j=1}^{N}\sin(\omega_n t_j + \alpha_n)\,\sin(\omega_k t_j + \alpha_k), \qquad \mathbf{a}_k^T\mathbf{a}_k = A_k^2\sum_{j=1}^{N}\sin^2(\omega_k t_j + \alpha_k), \tag{2.54}$$
Equation (2.53) can now be rewritten in the form
$$\mathbf{C}\mathbf{v}_k \approx \frac{\mathbf{a}_k^T\mathbf{a}_k}{N}\,\mathbf{v}_k = \hat{\lambda}_k\mathbf{v}_k. \tag{2.55}$$

In such a case, since the frequencies of the modes are distinct, each cross
term $\mathbf{v}_n\mathbf{a}_n^T\mathbf{a}_k/N$ will tend to zero as $N$ tends
towards infinity, except for the term
$\mathbf{v}_k\mathbf{a}_k^T\mathbf{a}_k/N$, which converges approximately to
$\hat{\lambda}_k\mathbf{v}_k$ [11]. In other words, the eigenvectors (POMs) of
$\mathbf{C}$ converge to the modal vectors, as expected.
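The convergence argument can be illustrated numerically. The sketch below (a
hypothetical 3-DOF system with randomly generated orthonormal mode shapes and
arbitrarily chosen distinct frequencies; not part of the thesis) builds the
response matrix of Eq. (2.50), forms the correlation matrix of Eq. (2.51), and
checks that its eigenvectors align with the modal vectors:

```python
import numpy as np

rng = np.random.default_rng(3)
m, N = 3, 20000
t = np.linspace(0, 200, N)

# hypothetical orthonormal modal matrix and distinct natural frequencies
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
amps = np.array([3.0, 2.0, 1.0])           # modal amplitudes A_n
freqs = np.array([1.0, 1.7, 2.9])          # natural frequencies (rad/s)

A = amps * np.sin(np.outer(t, freqs))      # modal coordinates a_n(t_k), Eq. (2.49)
X = A @ V.T                                # response matrix, Eq. (2.50)
C = X.T @ X / N                            # correlation matrix, Eq. (2.51)

lam, poms = np.linalg.eigh(C)
lam, poms = lam[::-1], poms[:, ::-1]       # descending POV order

# each POM should align (up to sign) with one modal vector
align = np.abs(poms.T @ V)
print(np.round(align, 2))                  # close to the identity matrix
```

With many samples the cross terms of Eq. (2.54) average out, so the alignment
matrix is nearly the identity and the POVs approach $A_n^2/2$.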
In what follows we provide a similar interpretation of POMs in the
framework of singular value decomposition analysis.
2.7.3 Physical Interpretation of POMs Using Singular Value
Decomposition
Consider again the system (2.41). When the displacements are sampled
N
times, we can form a displacementhistory array. The observed field is then
represented by the data matrix:
@
,
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
^
^
^
^
2
1
2
2
2
2
1
1
1
2
1
1
2
1
»
»
»
»
¼
º
«
«
«
«
¬
ª
N
m
N
N
m
m
m
t
x
t
x
t
x
t
x
t
x
t
x
t
x
t
x
t
x
#
%
#
#
x
x
x
(2.56)
where
¦
m
n
n
n
k
k
t
a
v
x
1
)
(
)
(
^
^
^
.
From (2.56), it follows that the measurement matrix can be expressed in the
form
,
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
)
(
^
^
v^
1
1
2
1
1
1
2
1
2
2
1
1
2
1
1
1
2
1
1
1
1
T
m
n
mn
N
n
m
n
n
N
n
m
n
n
N
n
m
n
mn
n
m
n
n
n
m
n
n
n
m
n
mn
n
m
n
n
n
m
n
n
n
v
t
a
v
t
a
v
t
a
v
t
a
v
t
a
v
t
a
v
t
a
v
t
a
v
t
a
»
»
»
»
»
»
»
»
¼
º
«
«
«
«
«
«
«
«
¬
ª
¦
¦
¦
¦
¦
¦
¦
¦
¦
#
%
#
#
(2.57)
where
,
^
^
^
^
^
^
^
^
^
2
1
2
22
21
1
12
11
v^
»
»
»
»
¼
º
«
«
«
«
¬
ª
mm
m
m
m
m
v
v
v
v
v
v
v
v
v
#
%
#
#
(2.58)
is the modal matrix.
Defining
34
,
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
)
(
^
^
2
1
2
2
2
1
2
1
2
1
1
1
»
»
»
»
¼
º
«
«
«
«
¬
ª
N
m
m
m
N
N
t
a
t
a
t
a
t
a
t
a
t
a
t
a
t
a
t
a
#
%
#
#
(2.59)
the system (2.57) can be rewritten as
,
^
]
)
(
^
)
(
^
)
(
^
][
^
^
^
[
^
v^
2
1
2
1
T
m
m
t
t
t
v
v
v
a
a
a
(2.60)
Further insight into the nature of this model can be obtained by expressing
(2.60) in the form
,
]
^
][
[
^
v^
T
T
9
U
Q
(2.61)
with the orthogonality conditions
U
U
UU
*
*
, where
v^
is the modal matrix,
,
is an
)
(
m
m
u
identity matrix, is an
))
(
(
m
N
m
u
matrix full of zeros, and Q
is an
))
(
(
m
N
N
u
matrix.
We remark that matrix Q does not influence
^
since it is multiplied by a
matrix of zeros [8]. Note also that matrix Q affects only matrix
^
and that the
scaling not affect the coefficients of
^
.
In order for (2.60) to admit a decomposition of the form (2.61), matrices
v^
and
]
^
[
Q should be orthogonal.
According to (2.61), the response matrix may then be written as
,
]
)
^
(
^
][
)
^
(
[
]
^
^
^
[
^
1
2
1
T
n
n
m
diag
diag
Q
v
v
v
a
a
(2.62)
or, in extended form,
.
^
)
(
^
^
)
(
^
0
0
^
0
0
0
0
0
0
^
]
^
^
[
^
1
1
1
1
1
»
¼
º
«
¬
ª
»
»
»
¼
º
«
«
«
¬
ª
Q
v
v
a
a
a
a
a
a
t
t
m
m
m
#
%
(2.63)
The eigenvectors of
^
can now be selected such that the columns of the
third matrix on the right hand side of (2.63) are orthogonal.
From linear algebra, the columns of $\hat{\mathbf{A}}\,\mathrm{diag}(\lVert\hat{\mathbf{a}}_n\rVert)^{-1}$ will be orthogonal if and only if

$$
\Big[\hat{\mathbf{A}}\,\mathrm{diag}\big(\lVert\hat{\mathbf{a}}_n\rVert\big)^{-1}\Big]^{T}\Big[\hat{\mathbf{A}}\,\mathrm{diag}\big(\lVert\hat{\mathbf{a}}_n\rVert\big)^{-1}\Big] =
\begin{bmatrix}
1 & \dfrac{\hat{\mathbf{a}}_1^{T}\hat{\mathbf{a}}_2}{\lVert\hat{\mathbf{a}}_1\rVert\lVert\hat{\mathbf{a}}_2\rVert} & \cdots \\
\dfrac{\hat{\mathbf{a}}_2^{T}\hat{\mathbf{a}}_1}{\lVert\hat{\mathbf{a}}_2\rVert\lVert\hat{\mathbf{a}}_1\rVert} & 1 & \cdots \\
\vdots & \vdots & \ddots
\end{bmatrix} = \mathbf{I}. \qquad (2.64)
$$
Inspection of (2.64) reveals that each offdiagonal term can be expressed in
the form
$$
\hat{\mathbf{a}}_n^{T}\hat{\mathbf{a}}_k =
\begin{cases}
\displaystyle\sum_{j=1}^{N} \hat{A}_n \hat{A}_k \sin(\hat{\omega}_n t_j)\sin(\hat{\omega}_k t_j), & \text{for } n \neq k, \\[2mm]
\displaystyle\sum_{j=1}^{N} \hat{A}_n^{2}\sin^{2}(\hat{\omega}_n t_j), & \text{for } n = k.
\end{cases} \qquad (2.65)
$$
Thus, if the natural frequencies $\omega_n$ are distinct, it follows that the columns of $\hat{\mathbf{A}}\,\mathrm{diag}(\lVert\hat{\mathbf{a}}_n\rVert)^{-1}$ are orthogonal if we consider an infinite set of sampled values. In particular, as $N$ increases the off-diagonal terms vanish, and only the diagonal terms remain.
Formally, we may write
$$
\hat{\mathbf{a}}_n^{T}\hat{\mathbf{a}}_k \to 0 \quad \text{as} \quad N \to \infty, \quad \text{for } n \neq k. \qquad (2.66)
$$
Therefore, it can be concluded that POMs converge to the modal vectors of
an undamped and unforced linear system for
N
sufficiently large.
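This convergence can be sketched numerically. The short Python example below (the two mode shapes, amplitudes, and frequencies are illustrative assumptions, not values from the thesis) builds the free response of a two-degree-of-freedom undamped system and measures how well the dominant POM, obtained from the SVD of the response matrix, aligns with the true modal vector as the number of samples grows:

```python
import numpy as np

# Hedged sketch: proper orthogonal modes (POMs) of an undamped, unforced
# 2-DOF response converge to the modal vectors as the number of samples
# N grows, provided the natural frequencies are distinct (Eq. 2.66).
# The mode shapes v1, v2 and frequencies w1, w2 below are illustrative.
def pom_alignment(N, dt=0.01):
    t = np.arange(N) * dt
    v1 = np.array([1.0, 1.0]) / np.sqrt(2)    # first modal vector
    v2 = np.array([1.0, -1.0]) / np.sqrt(2)   # second modal vector
    w1, w2 = 2.0 * np.pi * 1.0, 2.0 * np.pi * 2.3  # distinct frequencies
    # Response matrix X (2 x N): each column is the state at time t_k
    X = np.outer(v1, 1.5 * np.sin(w1 * t)) + np.outer(v2, 0.7 * np.sin(w2 * t))
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # |cos| of the angle between the dominant POM and the true mode shape
    return abs(U[:, 0] @ v1)

# Alignment with the true modal vector improves as N increases
print(pom_alignment(50), pom_alignment(5000))
```

For short records the sampled modal coordinates are not yet orthogonal, so the POMs are rotated away from the modal vectors; the rotation shrinks as the record lengthens.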
Once the columns of $\hat{\mathbf{A}}\,\mathrm{diag}(\lVert\hat{\mathbf{a}}_n\rVert)^{-1}$ are made orthogonal, the columns of $\mathbf{Q}$ can be computed so that they are orthogonal to those of $\hat{\mathbf{A}}\,\mathrm{diag}(\lVert\hat{\mathbf{a}}_n\rVert)^{-1}$. The analysis of more complicated situations follows along the same lines.
The following observations can be made about the singular value decomposition of the response matrix $\hat{\mathbf{X}}$:

- The columns of U represent the eigenmodes of the system.
- The first n columns of V are the normalized time modulations of the modes.
- For a finite-length time series, the columns of U are modal vectors.
On this basis, a comprehensive numerical approach to extract dynamic
trends from measured data is proposed in Chapter 3.
2.8 References
[1] P. Holmes, J. L. Lumley, and G. Berkooz, Turbulence, Coherent Structures, Dynamical Systems and Symmetry, New York: Cambridge University Press, 1996.
[2] G. Kerschen, J.-C. Golinval, A. F. Vakakis, and L. A. Bergman, "The method of proper orthogonal decomposition for dynamical characterization and order reduction of mechanical systems: An overview," Nonlinear Dynamics, vol. 41, pp. 147-169, 2005.
[3] S. S. Ravindran, "Reduced-order controllers for control of flow past an airfoil," International Journal for Numerical Methods in Fluids, vol. 50, no. 5, pp. 531-554, 2006.
[4] A. R. Messina and V. Vittal, "Extraction of dynamic patterns from wide-area measurements using empirical orthogonal functions," IEEE Transactions on Power Systems, vol. 22, no. 2, pp. 682-692, 2007.
[5] G. Kerschen, J.-C. Golinval, A. F. Vakakis, and L. A. Bergman, "The method of proper orthogonal decomposition for dynamical characterization and order reduction of mechanical systems: An overview," Nonlinear Dynamics, vol. 41, pp. 147-169, 2005.
[6] L. Sirovich, "Turbulence and the dynamics of coherent structures. Part I: Coherent structures," Quarterly of Applied Mathematics, vol. 45, no. 3, pp. 561-571, Oct. 1987.
[7] D. Kalman, "A singularly valuable decomposition: The SVD of a matrix," The College Mathematics Journal, vol. 27, no. 1, pp. 2-23, Jan. 1996.
[8] G. Kerschen and J.-C. Golinval, "Physical interpretation of the proper orthogonal modes using the singular value decomposition," Journal of Sound and Vibration, vol. 249, no. 5, pp. 849-865, 2002.
[9] J. Argyris and H.-P. Mlejnek, Dynamics of Structures, Texts on Computational Mechanics, North-Holland, Amsterdam, 1991.
[10] S. Han, "Application of proper orthogonal decomposition to structural vibration analysis," Mechanical Systems and Signal Processing, vol. 17, no. 5, pp. 989-1001, 2003.
[11] B. F. Feeny and R. Kappagantu, "On the physical interpretation of proper orthogonal modes in vibrations," Journal of Sound and Vibration, vol. 211, no. 4, pp. 607-616, 1998.
Chapter 3
Complex Empirical Orthogonal
Function
Many power system processes involve variability over both space and time.
These processes may contain moving patterns, travelling waves of different
spatial scales and temporal frequencies. The identification of dynamic
structures can be timeconsuming and difficult because of the large amount of
information that needs to be processed and compared.
The conventional empirical orthogonal eigenfunction analysis is primarily a method of compressing the time and space variability of a data set into the lowest possible number of spatial patterns. Each of these patterns is composed of standing modes of variability modulated by a time function. As such, real EOF analysis detects only standing waves, not travelling waves. The key point to observe is that real EOF analysis cannot deal with propagating features because it uses only the spatial correlation of the field; it is necessary to use both spatial and temporal information in order to identify such features.
In this chapter, we extend the conventional empirical orthogonal function analysis to the study and detection of propagating features in nonlinear processes. A new modelling technique that accurately describes the fundamental dynamics of large data sets through the use of reduced-order modelling is proposed.
This approach is ideally suited to the study of propagating and standing
features that can be associated with observed oscillations and is effective at
identifying and isolating nonstationary behavior.
3.1 Complex EOF Analysis
Empirical orthogonal function analysis of data fields is commonly carried out
under the assumption that each field can be represented as a spatially fixed
pattern of behavior. This method, however, cannot be used for detection of
propagating features because of the lack of phase information. To fully utilize
the data, a technique is needed that acknowledges the nonstationarity of the
time-series data [1-11].
Let $u(x_j, t_k)$ be a space-time scalar field representing a time series, where $x_j$, $j = 1, \ldots, n$, is a set of spatial variables on a space $\Omega$, and $t_k$, $k = 1, \ldots, N$, is the time at which the observations are made.
Provided
u
is simple and square integrable, it has a Fourier representation
of the form [1]
$$
u(x_j, t) = \sum_{m=1}^{\infty}\big[a_j(\omega_m)\cos(\omega_m t) + b_j(\omega_m)\sin(\omega_m t)\big], \qquad (3.1)
$$
where $a_j(\omega_m)$ and $b_j(\omega_m)$ are the Fourier coefficients defined as

$$
a_j(\omega_m) = \frac{1}{\pi}\int u(x_j,t)\cos(\omega_m t)\,dt, \qquad
b_j(\omega_m) = \frac{1}{\pi}\int u(x_j,t)\sin(\omega_m t)\,dt.
$$
This allows description of travelling waves propagating throughout the
system. Equation (3.1) can be rewritten in the form
$$
u_c(x_j, t) = \sum_{m=1}^{\infty} c_j(\omega_m)\,e^{i\omega_m t}, \qquad (3.2)
$$
where $c_j(\omega_m) = a_j(\omega_m) - i\,b_j(\omega_m)$ [1] and $i = \sqrt{-1}$ is the unit complex number.
Expanding (3.2) and collecting terms gives
$$
u_c(x_j,t) = \sum_{m=1}^{\infty}\big[a_j(\omega_m)\cos(\omega_m t) + b_j(\omega_m)\sin(\omega_m t)\big]
+ i\sum_{m=1}^{\infty}\big[a_j(\omega_m)\sin(\omega_m t) - b_j(\omega_m)\cos(\omega_m t)\big]
= u(x_j,t) + i\,u_H(x_j,t), \qquad (3.3)
$$
where the real part of $u_c$ is given by (3.1) and the imaginary part is the Hilbert transform of $u(x_j,t)$.
In formal terms, the Hilbert transform of a continuous time series $u(x_j,t)$ is defined by the convolution [12]

$$
u_H(x_j,t) = \frac{1}{\pi}\,\mathrm{P}\!\!\int_{-\infty}^{\infty}\frac{u(x_j,y)}{t-y}\,dy, \qquad (3.4)
$$
where the integral is taken to mean the Cauchy principal value.
The most well-known classical methods for computing the Hilbert transform are derived from the Fourier transform. However, this transform has a global character and hence is not suitable for characterization of local signal parameters. Alternatives for local implementation of the Hilbert transformation, based on local properties, are developed and tested in this analysis.
For discretely sampled data, the Hilbert transform can be derived, in the time domain, by applying a rectangular rule to (3.4). It can be shown that

$$
u_H(x_j,t) = \frac{2}{\pi}\sum_{k=-\infty}^{\infty}\frac{u\big(t-(2k+1)\tau\big)}{2k+1}, \qquad (3.5)
$$

where $\tau$ is the step size. When (3.5) is applied to a discrete time series $u(x_j,t_k)$, $k = 0, \pm 1, \ldots$, one gets [8]
$$
u_H(x_j,t) = \frac{2}{\pi}\sum_{k\geq 0}\frac{u\big(t-(2k+1)\tau\big) - u\big(t+(2k+1)\tau\big)}{2k+1}. \qquad (3.6)
$$
In previous formulations, the Hilbert transform was estimated by truncating
the series (3.6) [11]. This truncation was approximated using a convolution filter
as
$$
u_H(x_j,t_k) = \sum_{\ell=-L}^{L} h_L(\ell)\,u(x_j,t_{k-\ell}), \qquad (3.7)
$$
where $h$ is a convolution filter with unit amplitude response and 90° phase shift. In this research, it has been found that a simple filter that has the desired properties of approximate unit amplitude response and $\pi/2$ phase shift is given by [13]

$$
h(l) = \begin{cases}
\dfrac{2}{\pi l}\sin^{2}\!\left(\dfrac{\pi l}{2}\right), & l \neq 0, \\[2mm]
0, & l = 0,
\end{cases} \qquad (3.8)
$$
where $-L \leq l \leq L$. We omit the calculations.
As $L \to \infty$, equation (3.7) yields an exact Hilbert transform. This represents a filtering operation upon $u(x_j,t)$ in which the amplitude of each Fourier spectral component remains unchanged while its phase is advanced by $\pi/2$. Hannachi et al. [8] found that $7 \leq L \leq 25$ provides adequate values for the filter response.
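The truncated filter (3.7)-(3.8) can be sketched numerically as follows; the test signal and the choice L = 25 are illustrative assumptions within the range quoted above:

```python
import numpy as np

# Hedged sketch of the local Hilbert-transform filter of Eq. (3.8):
# h(l) = 2*sin^2(pi*l/2)/(pi*l) for l != 0 (and zero at l = 0), applied
# as the convolution (3.7). L = 25 follows the 7 <= L <= 25 guideline.
def hilbert_filter(L=25):
    l = np.arange(-L, L + 1)
    h = np.zeros_like(l, dtype=float)
    nz = l != 0
    h[nz] = 2.0 * np.sin(np.pi * l[nz] / 2) ** 2 / (np.pi * l[nz])
    return h

def hilbert_local(u, L=25):
    # 'same'-length convolution; the edge samples are unreliable and
    # should be trimmed before use
    return np.convolve(u, hilbert_filter(L), mode="same")

# A cosine should come out phase-advanced by pi/2, i.e. as a sine wave
n = np.arange(400)
w = 2 * np.pi * 0.05                      # 0.05 cycles per sample
uH = hilbert_local(np.cos(w * n))
err = np.max(np.abs(uH[50:-50] - np.sin(w * n)[50:-50]))
print(err)
```

The residual error comes from truncating the ideal 2/(πl) impulse response at l = ±L; it shrinks as L grows, at the cost of a longer filter.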
In what follows, we discuss the extension of the conventional EOF analysis
using the above approach to compute standing and propagating features of
general nonstationary processes where the eigenvectors of the covariance
matrix are complex and can be expressed alternatively as a magnitude and
phase pair.
3.2 Theoretical fundamentals of Empirical Orthogonal
Functions
The relationship between spatial and temporal behavior can be obtained by
noting that spatiotemporal information can be mapped into a time and space
grid. We suppose that we have a gridded data set composed of a spacetime
field, $u(x,t)$, representing the value of the field $u$ at time $t$ and spatial position $x$. Figure 3.1 shows a conceptual representation of spatial and temporal
variability captured in the energy field. Rows in this plot capture spatial
information, whilst columns capture temporal information.
The method of complex EOF analysis is an optimal technique of
biorthogonal decomposition to find a spatial and temporal basis that spans an
ensemble of data collected from experiments or numerical simulations [13].
The method essentially decomposes a fluctuating field into a weighted linear
sum of spatial orthogonal modes and temporal orthogonal modes such that the
projection onto the first few modes is optimal. Before introducing the method,
the theoretical background for the biorthogonal decomposition of spatial and
temporal variability is introduced.
The fundamental idea of EOF analysis is to find a basis $\varphi$ for the linear, infinite-dimensional Hilbert space $L^{2}([0,1])$ that maximizes the averaged projection of a suitably normalized response matrix $\mathbf{X}$; the corresponding functional for this constrained variational problem is solved and reduced in Section 2.2 to

$$
\int_{0}^{1}\left[\int_{0}^{1}\big\langle u(x)\,u^{*}(x')\big\rangle\,\varphi(x')\,dx' - \lambda\,\varphi(x)\right]\psi^{*}(x)\,dx = 0, \qquad (3.9)
$$
where, if $\psi^{*}(x)$ is an arbitrary variation, the optimal basis is given by the eigenfunctions $\varphi_j$ of the integral equation

$$
\int_{0}^{1}\big\langle u(x)\,u^{*}(x')\big\rangle\,\varphi(x')\,dx' = \lambda\,\varphi(x),
$$

whose kernel is the averaged autocorrelation function $C(x,x') \stackrel{\mathrm{def}}{=} \langle u(x)\,u^{*}(x')\rangle$ [4-8]. The eigenfunctions are consequently called empirical orthogonal functions (EOFs). Now, we assume that if $\psi^{*}(x) \neq 0$, then (3.9) can be rewritten as
$$
\int_{0}^{1}\!\!\int_{0}^{1}\big\langle u(x)\,u^{*}(x')\big\rangle\,\varphi(x')\,\psi^{*}(x)\,dx'\,dx = \lambda\int_{0}^{1}\varphi(x)\,\psi^{*}(x)\,dx, \qquad (3.10)
$$
such that the inner product $\langle \varphi(x'), \psi^{*}(x)\rangle \neq 0$, with orthogonal eigenvectors $\varphi$, $\psi$, i.e.,

$$
\varphi_i^{T}\varphi_j = \begin{cases}\delta_{ij}, & i = j,\\ 0, & i \neq j,\end{cases}
\qquad \text{and} \qquad
\psi_i^{T}\psi_j = \begin{cases}\delta_{ij}, & i = j,\\ 0, & i \neq j.\end{cases}
$$
From (3.10) it can be seen that if there exists an arbitrary (spatial) variation $\psi^{*}(x) \neq 0$, then the original field can be reconstructed using the two optimal orthogonal bases, $\varphi$ and $\psi$, given by (3.10).
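A minimal numerical sketch of this biorthogonal property, assuming an illustrative separable space-time field (the grid sizes and patterns below are not from the thesis), is:

```python
import numpy as np

# Hedged illustration of the biorthogonal decomposition behind
# (3.9)-(3.10): the SVD of a space-time field supplies two orthonormal
# bases at once, a spatial one (columns of U) and a temporal one
# (columns of V), and the leading pair maximizes the averaged
# projection of the field.
t = np.linspace(0, 1, 500)
x = np.linspace(0, 1, 40)
# Illustrative field: the sum of two separable space-time patterns
field = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * 3 * t))
         + 0.3 * np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * 7 * t)))

U, s, Vt = np.linalg.svd(field, full_matrices=False)
# Both bases are orthonormal: U^T U = I and V^T V = I
ortho_space = np.max(np.abs(U.T @ U - np.eye(U.shape[1])))
ortho_time = np.max(np.abs(Vt @ Vt.T - np.eye(Vt.shape[0])))
print(ortho_space, ortho_time, s[2])
```

Because the field is the sum of two separable patterns, only two singular values are nonzero, so the two-term biorthogonal expansion reconstructs the field exactly.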
[Figure] Fig. 3.1. Conceptual diagram illustrating the phenomenon of temporal and spatial energy in a field: a) three-dimensional view of the energy distribution of u[x_1, x_2, x_3, ..., x_n] over times t_1, ..., t_N, showing the spatial and temporal energy; b) two-dimensional representation.
Based on this notion, an efficient technique to find the optimal orthogonal
basis using complex EOF analysis is developed.
3.3 Complex-Vector EOF Analysis
Drawing on the above approach, an efficient formulation to compute complex time-dependent POMs has been derived. Following Feeny et al. [3], assume that $\mathbf{X}_c = \mathbf{X}_R + i\mathbf{X}_I$ is an $n$ (spatial points) by $N$ (temporal points) ensemble complex matrix. From the preceding results, it follows that

$$
\mathbf{X}_c = \mathbf{X}_R + i\mathbf{X}_I = \sum \lvert c\rvert\big[\cos(\theta_c t) + i\sin(\theta_c t)\big] = \mathbf{U}_R\boldsymbol{\Sigma}_R\mathbf{V}_R^{H} + i\,\mathbf{U}_I\boldsymbol{\Sigma}_I\mathbf{V}_I^{H}, \qquad (3.11)
$$
where $\lvert c\rvert$ and $\theta_c$ are the magnitude and phase of $\mathbf{X}_c$; $\mathbf{V}_R^{H}$ and $\mathbf{V}_I^{H}$ are the conjugate transposes of $\mathbf{V}_R$ and $\mathbf{V}_I$, respectively; the superscript $H$ denotes the Hermitian (conjugate) transpose; and the subscripts $c$, $R$, and $I$ indicate the complex, real, and imaginary vectors. Implicit in this model is the assumption that

$$
\mathbf{U}_R^{H}\mathbf{U}_R = \mathbf{I}, \qquad \mathbf{U}_I^{H}\mathbf{U}_I = \mathbf{I}, \qquad \mathbf{V}_R^{H}\mathbf{V}_R = \mathbf{I}, \qquad \mathbf{V}_I^{H}\mathbf{V}_I = \mathbf{I}.
$$
From equation (2.17), the autocorrelation matrix $\mathbf{C}$ for the case of complex data can be written in the form $\mathbf{C} = \mathbf{C}_R + i\mathbf{C}_I$. It can be easily verified that

$$
\mathbf{X}_c\mathbf{X}_c^{H} = \big[\mathbf{X}\cos(\theta_c t) + i\,\mathbf{X}\sin(\theta_c t)\big]\big[\mathbf{X}\cos(\theta_c t) + i\,\mathbf{X}\sin(\theta_c t)\big]^{H}
$$
$$
= \big\{\big[\mathbf{X}\cos(\theta_c t)\,\mathbf{X}^{T}\!\cos(\theta_c t) + \mathbf{X}\sin(\theta_c t)\,\mathbf{X}^{T}\!\sin(\theta_c t)\big] + i\big[\mathbf{X}\sin(\theta_c t)\,\mathbf{X}^{T}\!\cos(\theta_c t) - \mathbf{X}\cos(\theta_c t)\,\mathbf{X}^{T}\!\sin(\theta_c t)\big]\big\}, \qquad (3.12)
$$

and similarly for $\mathbf{X}_c^{H}\mathbf{X}_c$,
which translates into
$$
\begin{aligned}
\mathbf{X}_c\mathbf{X}_c^{H} &= \big[\mathbf{U}_R\boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_R^{T}\mathbf{U}_R^{T} + \mathbf{U}_I\boldsymbol{\Sigma}_I\boldsymbol{\Sigma}_I^{T}\mathbf{U}_I^{T}\big] + i\big[\mathbf{U}_I\boldsymbol{\Sigma}_I\boldsymbol{\Sigma}_R^{T}\mathbf{U}_R^{T} - \mathbf{U}_R\boldsymbol{\Sigma}_R\boldsymbol{\Sigma}_I^{T}\mathbf{U}_I^{T}\big], \\
\mathbf{X}_c^{H}\mathbf{X}_c &= \big[\mathbf{V}_R\boldsymbol{\Sigma}_R^{T}\boldsymbol{\Sigma}_R\mathbf{V}_R^{T} + \mathbf{V}_I\boldsymbol{\Sigma}_I^{T}\boldsymbol{\Sigma}_I\mathbf{V}_I^{T}\big] + i\big[\mathbf{V}_R\boldsymbol{\Sigma}_R^{T}\boldsymbol{\Sigma}_I\mathbf{V}_I^{T} - \mathbf{V}_I\boldsymbol{\Sigma}_I^{T}\boldsymbol{\Sigma}_R\mathbf{V}_R^{T}\big],
\end{aligned} \qquad (3.13)
$$
where $\boldsymbol{\Sigma}_R^{T}$ and $\boldsymbol{\Sigma}_I^{T}$ denote the transposes of $\boldsymbol{\Sigma}_R$ and $\boldsymbol{\Sigma}_I$, respectively.

As is apparent from (3.11), the columns of $\mathbf{U}_R$ and $\mathbf{U}_I$ are the eigenvectors of the real and imaginary parts of $\mathbf{X}_c\mathbf{X}_c^{H}$, and the columns of $\mathbf{V}_R$ and $\mathbf{V}_I$ are the eigenvectors of the real and imaginary parts of $\mathbf{X}_c^{H}\mathbf{X}_c$, respectively. The $n$ singular values on the diagonals of $\boldsymbol{\Sigma}_R$ and $\boldsymbol{\Sigma}_I$ are the square roots of the nonzero eigenvalues of the real and imaginary parts of both $\mathbf{X}_c\mathbf{X}_c^{H}$ and $\mathbf{X}_c^{H}\mathbf{X}_c$, divided by the number of samples $N$, where $n$ is the rank of the complex matrix $\mathbf{X}_c$.
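A quick numerical check of this structure, using a purely illustrative random complex ensemble rather than measured data, is:

```python
import numpy as np

# Hedged numerical check of the structure claimed for the complex
# covariance C = C_R + i*C_I in (3.12)-(3.13): C_R is symmetric and
# C_I is skew-symmetric when C is formed from a complex ensemble X_c.
# The random field below is purely illustrative.
rng = np.random.default_rng(0)
n, N = 4, 200
Xc = rng.standard_normal((n, N)) + 1j * rng.standard_normal((n, N))

C = (Xc @ Xc.conj().T) / N              # n x n complex covariance
CR, CI = C.real, C.imag

sym_err = np.max(np.abs(CR - CR.T))     # C_R = C_R^T
skew_err = np.max(np.abs(CI + CI.T))    # C_I = -C_I^T
# C is Hermitian, so its eigenvalues are real and non-negative
eigs = np.linalg.eigvalsh(C)
print(sym_err, skew_err, eigs.min())
```

The Hermitian structure is what guarantees real, non-negative modal energies even though the field itself is complex.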
It can be seen from (3.12)-(3.13) that the autocovariance matrix is $\mathbf{C} = \mathbf{C}_R + i\mathbf{C}_I$, where the real part $\mathbf{C}_R$ is a symmetric matrix, that is, $\mathbf{C}_R = \mathbf{C}_R^{T}$, and $\mathbf{C}_I$ is a skew-symmetric matrix, i.e., $\mathbf{C}_I = -\mathbf{C}_I^{T}$. If the size of $\mathbf{C}_I$ is odd, then the determinant of $\mathbf{C}_I$ is always zero. Because a symmetric matrix is a particular case of a Hermitian matrix, all of its eigenvalues and eigenvectors are real.
Further, the eigenvalues of a skew-symmetric matrix are all purely imaginary; it is a normal matrix, and its eigenvectors occur in complex-conjugate pairs. From the decomposition given in (3.12) it can be seen that the imaginary part is zero when time is in phase with the extremum of the cosine or sine, that is, when the sum of the two components is zero; at this time instant both are symmetric matrices. The imaginary part measures the degree of asymmetry when the sum of both matrices is different from zero; this is used to establish the existence of arbitrary variations in space, $\psi^{*}(x) \neq 0$, and hence the existence of travelling wave components.
We also remark from the eigenvalue decomposition in (3.13) that the bases of the empirical orthogonal functions are defined as $\varphi_R(x)$ for the real part of the complex field and $\varphi_I(x)$ for the imaginary part; the approximation to the original field is written as the sum of pure standing and travelling wave components, denoted by the subscripts $swc$ and $twc$, as

$$
\mathbf{X}_c = \mathbf{X}_{swc} + \mathbf{X}_{twc}. \qquad (3.14)
$$
In an attempt to address these issues in the context of EOF analysis, an
analytical framework that treats both standing and travelling features is next
introduced.
3.4 Analysis of Propagating Features
The timedependent complex coefficients associated with each eigenfunction
can be conveniently split into their amplitudes and phases.
Following the notation defined earlier, the ensemble of data can be
expressed as the complex expansion
$$
\mathbf{X}_c = \sum_{j=1}^{p} A_R^{(j)}(t)\,\varphi_R^{(j)}(x) + i\sum_{j=1}^{q} A_I^{(j)}(t)\,\varphi_I^{(j)}(x), \qquad (3.15)
$$
where $A_R^{(j)}(t)$ and $A_I^{(j)}(t)$ are obtained as the projections of the bases $\varphi_R^{(j)}(x)$ and $\varphi_I^{(j)}(x)$, respectively, onto the complex field $\mathbf{X}_c$, of the form

$$
A_R^{(j)}(t) = \mathbf{X}_c\,\varphi_R^{(j)}(x), \qquad A_I^{(j)}(t) = \mathbf{X}_c\,\varphi_I^{(j)}(x). \qquad (3.16)
$$
From the complex EOF analysis in (3.15), the ensemble of data can be expressed as [5,12-15]
$$
\mathbf{X}_c = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\,e^{i\left[\phi_R^{(j)}(t)+\theta_R^{(j)}(x)\right]} + \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\,e^{i\left[\phi_I^{(j)}(t)+\theta_I^{(j)}(x)+\pi/2\right]}, \qquad (3.17)
$$
where $R_R^{(j)}(t)$ and $R_I^{(j)}(t)$ are the temporal amplitude functions, and $S_R^{(j)}(x)$ and $S_I^{(j)}(x)$ are the spatial amplitude functions associated with the wave decomposition. This effectively decomposes the data into spatial and temporal modes.
Four measures that define moving features in $u(x,t)$ can then be defined:
1. Spatial distribution of variability of each eigenmode
2. Relative phase fluctuation
3. Temporal variability in magnitude
4. Variability of the phase of a particular oscillation
The succeeding sections describe the properties of these representations.
3.4.1 Spatial Amplitude Functions, $S_R^{(j)}(x)$, $S_I^{(j)}(x)$

The spatial amplitude functions, $S_R^{(j)}(x)$ and $S_I^{(j)}(x)$, are defined as

$$
S_R^{(j)}(x) = \Big[\varphi_R^{(j)}(x)^{*}\,\varphi_R^{(j)}(x)\Big]^{1/2}, \qquad
S_I^{(j)}(x) = \Big[\varphi_I^{(j)}(x)^{*}\,\varphi_I^{(j)}(x)\Big]^{1/2}. \qquad (3.18)
$$
This function shows the spatial distribution of variability associated with
each eigenmode.
3.4.2 Spatial Phase Functions, $\theta_R^{(j)}(x)$, $\theta_I^{(j)}(x)$

This function shows the relative phase fluctuation among the various spatial locations where $u$ is defined and is given by

$$
\theta_R^{(j)}(x) = \tan^{-1}\!\left\{\frac{\mathrm{im}\big[\varphi_R^{(j)}(x)\big]}{\mathrm{re}\big[\varphi_R^{(j)}(x)\big]}\right\}, \qquad
\theta_I^{(j)}(x) = \tan^{-1}\!\left\{\frac{\mathrm{im}\big[\varphi_I^{(j)}(x)\big]}{\mathrm{re}\big[\varphi_I^{(j)}(x)\big]}\right\}. \qquad (3.19)
$$
This measure, for which an arbitrary reference value must be selected, varies continuously between 0 and $2\pi$. Together these equations give a measure of
the space distribution of energy and can be used to identify the dominant
modes and their phase relationships. Further, for each dominant mode of
interest, a mode shape can be computed by using the spatial part of (3.15).
3.4.3 Temporal Amplitude Functions, $R_R^{(j)}(t)$, $R_I^{(j)}(t)$

Similar to the description of the spatial amplitude functions, the temporal amplitude functions, $R_R^{(j)}(t)$ and $R_I^{(j)}(t)$, can be defined as

$$
R_R^{(j)}(t) = \Big[A_R^{(j)}(t)^{*}\,A_R^{(j)}(t)\Big]^{1/2}, \qquad
R_I^{(j)}(t) = \Big[A_I^{(j)}(t)^{*}\,A_I^{(j)}(t)\Big]^{1/2}. \qquad (3.20)
$$
These functions give a measure of the temporal variability in the magnitude
of the modal structure of the field,
u
. As described later, the general form of
these equations is very amenable to computational analysis.
3.4.4 Temporal Phase Functions, $\phi_R^{(j)}(t)$, $\phi_I^{(j)}(t)$

The temporal variation of the phase associated with $u(x,t)$ is given by

$$
\phi_R^{(j)}(t) = \tan^{-1}\!\left\{\frac{\mathrm{im}\big[A_R^{(j)}(t)\big]}{\mathrm{re}\big[A_R^{(j)}(t)\big]}\right\}, \qquad
\phi_I^{(j)}(t) = \tan^{-1}\!\left\{\frac{\mathrm{im}\big[A_I^{(j)}(t)\big]}{\mathrm{re}\big[A_I^{(j)}(t)\big]}\right\}. \qquad (3.21)
$$
For the simple case of a sinusoidal wave with fixed frequency and wave number, $\phi_i(t) = \omega t$. In the more general (and interesting) case, the wave number, frequency, and phase velocity of the modal components can be calculated from the derivatives of the phase,

$$
k_i = \frac{d\phi_i}{dx}, \qquad \omega_i = \frac{d\phi_i}{dt}, \qquad c_i = \frac{\omega_i}{k_i}, \qquad (3.22)
$$

where $c_i$ is the phase velocity of the function.
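The amplitude and phase recipes (3.20)-(3.22) can be sketched numerically as follows; the single-mode coefficient A(t), its amplitude, and its frequency are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of the temporal amplitude and phase functions
# (3.20)-(3.21): for a complex modal coefficient A(t), the amplitude is
# R(t) = [A*(t)A(t)]^(1/2) and the phase is phi(t) = atan2(Im A, Re A);
# differentiating phi in time recovers the angular frequency, as in (3.22).
dt = 0.001
t = np.arange(0, 2.0, dt)
w = 2 * np.pi * 0.8                    # illustrative angular frequency
A = 3.0 * np.exp(1j * w * t)           # illustrative modal coefficient A(t)

R = np.sqrt((A.conj() * A).real)       # temporal amplitude, Eq. (3.20)
phi = np.unwrap(np.angle(A))           # temporal phase, Eq. (3.21)
w_est = np.gradient(phi, dt)           # d(phi)/dt ~ angular frequency

print(R[0], w_est[len(t) // 2])
```

For real data the unwrapped phase is not exactly linear, and its local slope gives an instantaneous frequency estimate rather than a single constant.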
Equations (3.15)-(3.22) provide a complete characterization of any propagating effects and periodicity in the original data field which might be obscured by standard cross-spectral analysis. Finally, it might be remarked that, in the special case of real analysis, these expressions simplify to the normal definitions. We now turn our attention to the analysis of spatio-temporal behavior.
3.4.5 Spatio-Temporal Analysis
In order to investigate travelling features, let Equation (3.17) be expressed as
$$
\mathbf{X}_c = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\,e^{i\phi_R^{(j)}(t)}e^{i\theta_R^{(j)}(x)} + \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\,e^{i\phi_I^{(j)}(t)}e^{i\theta_I^{(j)}(x)}, \qquad (3.23)
$$
where the real physical field is reconstructed by taking the real part of the above complex expansion,

$$
\mathbf{X} = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\cos\!\big(\phi_R^{(j)}(t)+\theta_R^{(j)}(x)\big) + \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big). \qquad (3.24)
$$
This equation defines how the dynamical characterization can provide information on the variability of the $j$-th mode in the original field as [15-17]

$$
\mathbf{X} = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\cos\!\big(\phi_R^{(j)}(t)+\theta_R^{(j)}(x)\big) + \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big). \qquad (3.25)
$$
In terms of the complex representation, the wave components are given by [5]:

1. Wave numbers $k_R^{(j)}$, $k_I^{(j)}$. The wave numbers are given by

$$
k_R^{(j)} = \frac{d\theta_R^{(j)}}{dx} = 0, \qquad k_I^{(j)} = \frac{d\theta_I^{(j)}}{dx}, \qquad (3.26)
$$
where the units are rad·m⁻¹. Since $\theta_R^{(j)}$ takes values of 0 or $\pi$ only, $k_R^{(j)} = 0$; $\theta_R^{(j)}$ is the angle of the basis function given by the eigenvectors of the matrix $\mathbf{C}_R$ obtained in (3.13). Given that $\mathbf{C}_R$ is a symmetric matrix, all of its eigenvalues are real; it can then be seen from (3.25) that the terms in the first sum represent stationary (standing) wave modes, whereas those in the second sum represent travelling wave modes. The wave number is only defined for travelling waves; therefore (3.25) can be rewritten as
$$
\mathbf{X} = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\cos\!\big(\phi_R^{(j)}(t)\big) + \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big). \qquad (3.27)
$$
2. Angular frequencies $\omega_R^{(j)}$, $\omega_I^{(j)}$. These represent the angular frequencies in rad/s and are defined as

$$
\omega_R^{(j)} = \frac{d\phi_R^{(j)}}{dt}, \qquad \omega_I^{(j)} = \frac{d\phi_I^{(j)}}{dt}. \qquad (3.28)
$$
3. Average phase speeds $c_R^{(j)}$, $c_I^{(j)}$. They are computed from

$$
c_R^{(j)} = 0, \qquad c_I^{(j)} = \frac{\omega_I^{(j)}}{k_I^{(j)}}, \qquad (3.29)
$$

with physical units of m/s.
These equations provide a complete characterization of any propagating
features and periodicity in the original data field.
From (3.17) and (3.27) it can be seen that the term associated with the $j$-th travelling wave can be expressed in the form

$$
R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big) = R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\Big[\Big(\phi_I^{(j)}(t)+\tfrac{\pi}{2}\Big)+\Big(\theta_I^{(j)}(x)-\tfrac{\pi}{2}\Big)\Big]. \qquad (3.30)
$$
Using the trigonometric identities $\cos(A+B) = \cos(A)\cos(B) - \sin(A)\sin(B)$ and $\sin(A+B) = \sin(A)\cos(B) + \cos(A)\sin(B)$, where $A = \phi_I^{(j)}(t) + \pi/2$ and $B = \theta_I^{(j)}(x) - \pi/2$, and noting that

$$
\cos\!\Big(\phi_I^{(j)}(t)+\tfrac{\pi}{2}\Big) = -\sin\!\big(\phi_I^{(j)}(t)\big), \quad
\sin\!\Big(\phi_I^{(j)}(t)+\tfrac{\pi}{2}\Big) = \cos\!\big(\phi_I^{(j)}(t)\big), \quad
\cos\!\Big(\theta_I^{(j)}(x)-\tfrac{\pi}{2}\Big) = \sin\!\big(\theta_I^{(j)}(x)\big), \quad
\sin\!\Big(\theta_I^{(j)}(x)-\tfrac{\pi}{2}\Big) = -\cos\!\big(\theta_I^{(j)}(x)\big),
$$

then (3.30) can be re-expressed in the form
$$
R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big) = R_I^{(j)}(t)\,S_I^{(j)}(x)\Big\{\cos\!\big(\phi_I^{(j)}(t)\big)\cos\!\big(\theta_I^{(j)}(x)\big) - \sin\!\big(\phi_I^{(j)}(t)\big)\sin\!\big(\theta_I^{(j)}(x)\big)\Big\}. \qquad (3.31)
$$
From (3.31), it can be seen that a travelling wave component is also identified as the sum of two standing wave components. To obtain the decomposition of the original field into its pure standing wave components, it is necessary to subtract the pure travelling wave components, of the form

$$
\mathbf{X}_{swc} = \mathbf{X} - \mathbf{X}_{twc} = \sum_{j=1}^{p} R_R^{(j)}(t)\,S_R^{(j)}(x)\cos\!\big(\phi_R^{(j)}(t)\big), \qquad (3.32)
$$
(3.32)
where $\mathbf{X}_{swc}$ represents the part of the original field given by the pure standing wave components, and

$$
\mathbf{X}_{twc} = \sum_{j=1}^{q} R_I^{(j)}(t)\,S_I^{(j)}(x)\cos\!\big(\phi_I^{(j)}(t)+\theta_I^{(j)}(x)\big) \qquad (3.33)
$$
represents the part given by the pure travelling wave components, where the damping of each mode is given by its amplitude.

A practical criterion for choosing the number of relevant modes is given by the energy percentage contained in the $p$ and $q$ modes, of the form

$$
\frac{\sum_{j=1}^{p}\big\lVert \mathbf{X}_{swc}^{(j)}\big\rVert_{F}^{2} + \sum_{j=1}^{q}\big\lVert \mathbf{X}_{twc}^{(j)}\big\rVert_{F}^{2}}{\lVert \mathbf{X}\rVert_{F}^{2}} \times 100 \geq 99\%, \qquad (3.34)
$$

where $\lVert \cdot \rVert_{F}$ denotes the Frobenius norm.
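A minimal sketch of this kind of energy criterion, using singular values as per-mode Frobenius energies (the test field below is an illustrative rank-2 construction, not thesis data), is:

```python
import numpy as np

# Hedged sketch of a mode-selection rule in the spirit of (3.34): keep
# the smallest number of modes whose cumulative Frobenius-norm energy
# reaches 99% of the total. Since ||X||_F^2 equals the sum of squared
# singular values, the sigma_j^2 serve as per-mode energies.
def modes_for_energy(X, threshold=0.99):
    s = np.linalg.svd(X, compute_uv=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(energy, threshold) + 1)

# Rank-2 field plus tiny noise: two modes should carry >= 99% energy
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 300)
v1 = np.ones(8)                              # illustrative mode shape 1
v2 = np.concatenate([np.ones(4), -np.ones(4)])  # illustrative mode shape 2
X = (np.outer(v1, np.sin(2 * np.pi * t))
     + np.outer(v2, np.cos(6 * np.pi * t))
     + 1e-4 * rng.standard_normal((8, 300)))
print(modes_for_energy(X))
```

Two equal-energy patterns dominate the field, so the criterion retains exactly two modes and discards the noise floor.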
Equations (3.27) through (3.34) provide a complete characterization of any propagating effects and periodicity in the original data field which might be obscured by normal cross-spectral analysis. Finally, it might be remarked that, in the special case of real analysis, these expressions simplify to the normal definitions.

Later, we discuss refinements to the above model, so that travelling features can be accurately characterized using real-time information.
3.4.6 Motivating Examples: Modeling of Propagating Waves Using
the Covariance Matrix
In an effort to better understand the mechanism of wave propagation using complex EOF analysis, we consider, as a first example, a non-dispersive plane wave.
For simplicity, consider a non-dispersive plane wave propagating at phase speed $c$, with wavenumber defined as $k = \omega/c$, past an array of sensors at positions $x_j$, given by [5]

$$
u_j(t) = \sum_{\omega}\big[\alpha(\omega)\cos(kx_j - \omega t) + \beta(\omega)\sin(kx_j - \omega t)\big]. \qquad (3.35)
$$
Expanding (3.35) we have
$$
u_j(t) = \sum_{\omega}\Big\{\big[\alpha(\omega)\cos(kx_j) + \beta(\omega)\sin(kx_j)\big]\cos(\omega t) + \big[\alpha(\omega)\sin(kx_j) - \beta(\omega)\cos(kx_j)\big]\sin(\omega t)\Big\}. \qquad (3.36)
$$
Now, using the identities
$$
a_j(\omega) = \alpha(\omega)\cos(kx_j) + \beta(\omega)\sin(kx_j), \qquad
b_j(\omega) = \alpha(\omega)\sin(kx_j) - \beta(\omega)\cos(kx_j), \qquad (3.37)
$$
(3.35) can be rewritten as
$$
u_j(t) = \sum_{\omega}\big\{a_j(\omega)\cos(\omega t) + b_j(\omega)\sin(\omega t)\big\}, \qquad (3.38)
$$
where, for this example, $u$ is a white, band-limited signal given by

$$
\big\langle \alpha^{2}(\omega)\big\rangle = \big\langle \beta^{2}(\omega)\big\rangle =
\begin{cases}
A^{2}, & \text{if } \omega_1 \leq \omega \leq \omega_2, \\
0, & \text{if } \omega < \omega_1 \text{ or } \omega > \omega_2.
\end{cases} \qquad (3.39)
$$
To obtain phase information between stations, a complex representation of
(3.38) is invoked:
$$
u_j(t) = \sum_{\omega}\big\{[a_j(\omega) + i\,b_j(\omega)]\,e^{i\omega t}\big\}. \qquad (3.40)
$$
The covariance matrix of (3.40) is given as
$$
\mathrm{C}_{jk} = \big\langle u_j(t)\,u_k^{*}(t)\big\rangle_{t}, \qquad (3.41)
$$

where $\langle\cdot\rangle_t$ denotes time averaging and the asterisk complex conjugation.
Substituting (3.40) into (3.41) and expanding gives

$$
\mathrm{C}_{jk} = \Big\langle \sum_{\omega}\big[a_j(\omega)+i\,b_j(\omega)\big]e^{i\omega t}\,\sum_{\omega'}\big[a_k(\omega')+i\,b_k(\omega')\big]^{*}e^{-i\omega' t}\Big\rangle_t
= \sum_{\omega}\Big\{\big[a_j(\omega)a_k(\omega) + b_j(\omega)b_k(\omega)\big] + i\big[b_j(\omega)a_k(\omega) - a_j(\omega)b_k(\omega)\big]\Big\}. \qquad (3.42)
$$
Using (3.37) and simplifying, (3.42) reduces to

$$
\mathrm{C}_{jk} = \sum_{\omega}\Big\{\big[\langle\alpha^{2}(\omega)\rangle + \langle\beta^{2}(\omega)\rangle\big]\cos\!\big(k(x_j-x_k)\big) + i\big[\langle\alpha^{2}(\omega)\rangle + \langle\beta^{2}(\omega)\rangle\big]\sin\!\big(k(x_j-x_k)\big)\Big\}. \qquad (3.43)
$$
For calculation purposes it is better to use the exponential form. Using the condition given in (3.39), $\mathrm{C}_{jk}$ can be rewritten as

$$
\mathrm{C}_{jk} = \sum_{\omega_1}^{\omega_2} 2A^{2}\big\{\cos(k\,\Delta x_{jk}) + i\sin(k\,\Delta x_{jk})\big\} = \sum_{\omega_1}^{\omega_2} 2A^{2}\,e^{ik\,\Delta x_{jk}}, \qquad (3.44)
$$

where $\Delta x_{jk} = x_j - x_k$.
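The phase structure of (3.44) can be checked numerically; the sensor positions, wave speed, and frequency below are illustrative assumptions:

```python
import numpy as np

# Hedged numerical check of (3.41)-(3.44): for a non-dispersive plane
# wave u_j(t) = exp(i(k*x_j - w*t + theta)) with a random phase theta,
# the time-averaged covariance C_jk acquires the phase k*dx_jk, so
# angle(C_jk) recovers the wavenumber. All amplitudes, speeds, and
# sensor positions below are illustrative.
rng = np.random.default_rng(2)
c = 2.0                                 # phase speed
w = 2 * np.pi * 1.5                     # single angular frequency
k = w / c                               # wavenumber k = w/c
x = np.array([0.0, 0.3, 0.9])           # sensor positions
t = np.arange(0, 200.0, 0.01)

theta = rng.uniform(0, 2 * np.pi)       # random phase realization
u = np.exp(1j * (k * x[:, None] - w * t[None, :] + theta))

C = (u @ u.conj().T) / len(t)           # C_jk = <u_j u_k*>_t, Eq. (3.41)
k_est = np.angle(C[1, 0]) / (x[1] - x[0])
print(k_est, k)
```

The time dependence cancels in u_j u_k*, so the residual phase of C_jk depends only on the sensor separation, which is exactly what makes the complex covariance sensitive to propagation.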
Replacing the summation with an integral in (3.44) and multiplying by $\Delta\omega/\Delta\omega$ yields

$$
\mathrm{C}_{jk} = \frac{2A^{2}}{\Delta\omega}\int_{\omega_1}^{\omega_2} e^{ik(\omega)\,\Delta x_{jk}}\,d\omega, \qquad \Delta\omega = \omega_2 - \omega_1. \qquad (3.45)
$$
This equation is amenable to integration by parts. Defining

$$
u(\omega) = \frac{1}{i\,\Delta x_{jk}\,\dfrac{dk}{d\omega}}, \qquad
dv = i\,\Delta x_{jk}\,\frac{dk}{d\omega}\,e^{ik(\omega)\,\Delta x_{jk}}\,d\omega, \qquad (3.46)
$$
where $du(\omega)$ is given by

$$
du(\omega) = -\frac{\dfrac{d^{2}k}{d\omega^{2}}}{i\,\Delta x_{jk}\left(\dfrac{dk}{d\omega}\right)^{2}}\,d\omega, \qquad (3.47)
$$
and substituting (3.46) into (3.45), one has

$$
\mathrm{C}_{jk} = \frac{2A^{2}}{\Delta\omega}\int_{\omega_1}^{\omega_2}\left(\frac{1}{i\,\Delta x_{jk}\,\dfrac{dk}{d\omega}}\right) d\!\left(e^{ik(\omega)\,\Delta x_{jk}}\right). \qquad (3.48)
$$
Integrating (3.48) by parts, and using the above results, we obtain

$$
\mathrm{C}_{jk} = \frac{2A^{2}}{\Delta\omega}\left[\frac{e^{ik(\omega)\,\Delta x_{jk}}}{i\,\Delta x_{jk}\,\dfrac{dk}{d\omega}}\right]_{\omega_1}^{\omega_2} - \frac{2A^{2}}{\Delta\omega}\int_{\omega_1}^{\omega_2} e^{ik(\omega)\,\Delta x_{jk}}\,du(\omega), \qquad (3.49)
$$
which gives

$$
\mathrm{C}_{jk} = \frac{2A^{2}}{i\,\Delta x_{jk}\,\Delta\omega}\left[\left(\frac{dk}{d\omega}\right)^{-1}\!\Bigg|_{\omega_2} e^{ik(\omega_2)\,\Delta x_{jk}} - \left(\frac{dk}{d\omega}\right)^{-1}\!\Bigg|_{\omega_1} e^{ik(\omega_1)\,\Delta x_{jk}}\right] - \frac{2A^{2}}{\Delta\omega}\int_{\omega_1}^{\omega_2} e^{ik(\omega)\,\Delta x_{jk}}\,du(\omega). \qquad (3.50)
$$
After some algebra, we can show that

$$
\mathrm{C}_{jk} = \frac{2A^{2}}{i\,\Delta x_{jk}\,\Delta\omega}\left[\left(\frac{dk}{d\omega}\right)^{-1}\!\Bigg|_{\omega_2} e^{ik(\omega_2)\,\Delta x_{jk}} - \left(\frac{dk}{d\omega}\right)^{-1}\!\Bigg|_{\omega_1} e^{ik(\omega_1)\,\Delta x_{jk}}\right]
$$