Automation Theory Defined by Systems and Processes


Scientific essay, 2017

29 pages

Excerpt


1. Definition of Systems and Processes
Automation systems are the most general systems known in engineering, since they
couple the management of matter, energy, and information in space, time, and
causality. Indeed, such systems define entire production processes where materials
are processed, transported, and stored. The production processes of materials
require (mostly electrical) energy needed for the operating machines, such that
energy has also to be transformed, transported, and stored. The machines are
controlled by computers, such that information flows are also present, implying that
information has also to be processed, communicated among the operating machines,
and stored.
In order to formalize the description of such automation processes we will define a
system and a process in a deductive manner in this chapter. This definition will
appear astonishing at this step, but will be clarified in the following chapters
explaining what information and causality are, and how they behave in physics
together with matter, energy, space, and time.
Definition 1: System: A system is a ten-dimensional vector consisting of 3 dimensions of space (x, y, z), 3 complementary dimensions of space given by the overall momenta ((x, p_x), (y, p_y), (z, p_z)), 1 dimension of time (t), 1 complementary dimension of time given by the energy (E), 1 dimension of causality (k), and 1 complementary dimension of causality given by the information (I):

S = ((x, p_x), (y, p_y), (z, p_z), (t, E), (k, I))
Definition 2: Process: A process is a nine-dimensional entity consisting of a 3 × 3 matrix

P = (p_ij), i, j = 1, 2, 3,

such that each element of the matrix defines one dimension.
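The two definitions can be sketched as a plain data structure; the following minimal Python illustration is not part of the definitions themselves, and the class and field names are assumptions chosen for readability:

```python
# Sketch of Definition 1: a system as five complementary (dimension, complement)
# pairs — space/momentum, time/energy, causality/information.
from dataclasses import dataclass

@dataclass
class System:
    x: float; p_x: float   # space paired with momentum
    y: float; p_y: float
    z: float; p_z: float
    t: float; E: float     # time paired with energy
    k: float; I: float     # causality paired with information

    def as_pairs(self):
        """Return the ten components as the five complementary pairs."""
        return [(self.x, self.p_x), (self.y, self.p_y), (self.z, self.p_z),
                (self.t, self.E), (self.k, self.I)]

s = System(1.0, 0.5, 2.0, 0.1, 0.0, 0.0, 3.0, 7.5, 1.0, 2.2)
assert len(s.as_pairs()) == 5                     # five complementary pairs
assert sum(len(p) for p in s.as_pairs()) == 10    # ten dimensions in total
```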
2. The Structure of Automation Systems
As already stated, automation systems operate on matter, energy, and information.
Matter is defined in physics by momenta derived from forces, and by angular
momenta derived from torques. A force is, generally speaking, a vector function of
space and time [1]. We have:

F(r, t)

with F describing the magnitude and direction of the force vector, r describing a vector in three-dimensional space (x, y, z), and t describing a point in time. A torque τ(φ, t) = r × F(r, t) (with F(r, t) as force, × as cross product, and r as position vector for the force) [1] is, generally speaking, a vector function of angle φ and time t. We have:

τ(φ, t)

with τ describing the magnitude and direction of the torque vector, φ describing an angle vector in three-dimensional space (φ_x, φ_y, φ_z), and t describing a point in time.
We can integrate the force in time in order to obtain the momentum [1]:

p(r) = ∫ F(r, t) dt

The momentum p is a vector function which depends on space r but no longer on time t, since the time dependence has disappeared by integrating the force over the time interval Δt = t2 − t1.
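The integration step can be illustrated numerically. The following sketch, with an arbitrary constant force, approximates p = ∫ F dt by a Riemann sum:

```python
# Sketch: momentum as the time integral of a force, p = ∫ F dt,
# approximated by a midpoint Riemann sum over [t1, t2].

def momentum_from_force(force, t1, t2, steps=10000):
    """Integrate force(t) over [t1, t2] with the midpoint rule."""
    dt = (t2 - t1) / steps
    return sum(force(t1 + (i + 0.5) * dt) for i in range(steps)) * dt

# A constant force of 2 N acting for 3 s yields p = 6 kg·m/s.
p = momentum_from_force(lambda t: 2.0, 0.0, 3.0)
assert abs(p - 6.0) < 1e-9
```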

One fundamental principle of physics is that the momentum p is conserved in a closed system. This can be derived from Newton's laws of motion [1].
Hence, in a closed system we have [1]:

p_1 = p_2

with p_1 describing the momentum before the interaction, and p_2 after the interaction.
We can integrate the torque in time in order to obtain the angular momentum [1]:

L(φ) = ∫ τ(φ, t) dt

The angular momentum L is a vector function which depends on the angle φ but no longer on time t, since the time dependence has disappeared by integrating the torque over the time interval Δt = t2 − t1.
One fundamental principle of physics is that the angular momentum L is conserved in a closed system. This can be derived from Newton's laws of motion [1].
Hence, in a closed system we have [1]:

L_1 = L_2

with L_1 describing the angular momentum before the interaction, and L_2 after the interaction.
Energy is defined in physics by integrating forces in space and torques over the angle. Hence, we can integrate the force F in space r and the torque τ over the angle φ in order to obtain the energy [1]:

E(t) = ∫ F(r, t) · dr + ∫ τ(φ, t) · dφ

The energy E is a scalar function which depends on time t but no longer on space r and on the angle φ, since the space dependence has disappeared by integrating the force over the space interval Δr = r2 − r1, and since the dependence on the angle has disappeared by integrating the torque over the angle interval Δφ = φ2 − φ1.
One fundamental principle of physics is that the energy E is conserved [1]. Hence, we always have

E_1 = E_2

with E_1 describing the energy before the interaction, and E_2 after the interaction.
Information can be defined in physics in the following manner: We can integrate the force in space and time and the torque over the angle and in time, or we can integrate the momentum p in space r and the angular momentum L over the angle φ, or we can integrate the energy E in time to obtain the information [4]:

∫∫ F(r, t) · dr dt + ∫∫ τ(φ, t) · dφ dt = ∫ p(r) · dr + ∫ L(φ) · dφ = ∫ E(t) dt = I_Δ

The information I_Δ is a scalar which is neither dependent on space nor on time, since the space dependence has disappeared by integrating the force over the space interval Δr = r2 − r1 and the torque over the angle interval Δφ = φ2 − φ1, and since the time dependence has disappeared by integrating the force and the torque over the time interval Δt = t2 − t1.

Hence, information is described by a scalar (by a number) and not by a function, like the force, momentum, angular momentum, or energy [4]. I_Δ summarizes the momentum of the space interval Δr = r2 − r1, the angular momentum of the angle interval Δφ = φ2 − φ1, and the energy of the time interval Δt = t2 − t1 in an index Δ.
The information I_Δ is defined by the units [4]:

[I_Δ] = kg·m²/s = N·m·s = J·s = W·s².
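The simplest of the three equivalent definitions, I as the time integral of the energy, can be illustrated numerically. A minimal sketch with an arbitrary constant energy:

```python
# Sketch: information I as the time integral of the energy, I = ∫ E dt,
# carrying the units of action (J·s) stated above.

def information_from_energy(energy, t1, t2, steps=10000):
    """Integrate energy(t) over [t1, t2] with the midpoint rule."""
    dt = (t2 - t1) / steps
    return sum(energy(t1 + (i + 0.5) * dt) for i in range(steps)) * dt

# A constant energy of 4 J over a 2 s interval yields I = 8 J·s.
I = information_from_energy(lambda t: 4.0, 0.0, 2.0)
assert abs(I - 8.0) < 1e-9
```

The result is a single number for the whole interval, matching the statement that I is a scalar rather than a function of space or time.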
Corollary 1: Hence, we have defined the information in a physical manner by using
a force and torque applied to a particle/body, and have connected this information to
bits used in information technology [4].
Corollary 2: Since information is not defined as a function of space and/or time but
by a number, we cannot formulate a law of conservation of information at this stage,
since we cannot speak of time or space before and after the interaction. Instead,
I_Δ summarizes the momentum, the angular momentum, and the energy of the
interaction process and represents the result as a number defining bits [4].
In physics, we can use five universal constants (the Planck constants) in order to
define units of measurement [2].
The gravitational constant, G; the speed of light in vacuum, c; the Planck constant, h; the Coulomb constant, k_C; the Boltzmann constant, k_B. Since we are interested in the force, torque, momentum, angular momentum, energy, and information, but not in electrical and thermodynamic processes, we focus on the first three Planck units.
The gravitational constant, G, is important when defining the force between two particles/bodies [1] in Newtonian mechanics:

F = G · (m1 · m2 / r²) · r̂   (1)

with m1 describing the mass of the first particle/body; m2 describing the mass of the second particle/body; r describing the radius between the first and second particle/body; r̂ describing a dimensionless unit vector in the direction of the line connecting m1 and m2. The force applied to matter is the key element in the introduction of momentum, angular momentum, energy, and information. Hence, equation (1) shall be called [4] the matter equation.
The speed of light in vacuum, c, is important when defining the relationship between matter (given by the mass m0 of the resting particle) and the energy E0 of the resting particle [1]:

E0 = m0 c²   (2)

Hence, equation (2) shall be called [4] the energy equation.
The Planck constant, h, is important when defining the relationship between energy, E, and frequency/time [1]:

E = h f   (3)

E defines the energy of the particle/wave and f defines its frequency. We have f = 1/T, with T being the period of the wave. The Planck constant, h, has the same dimension as information:

[h] = kg·m²/s = N·m·s = J·s = W·s².

Indeed, the equations

Δx · Δp ≥ h,   Δt · ΔE ≥ h

known as Heisenberg's uncertainty principle [1] state that the information defined by Δx · Δp or Δt · ΔE is at least h. Hence, h is the smallest possible information. Hence, we shall call equation (3) [4] the information equation.
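Equation (3) can be checked with concrete numbers. A short sketch; the chosen frequency is an arbitrary illustrative value:

```python
# Sketch of equation (3), E = h·f, and of the product E·T returning h.

h = 6.62607015e-34  # Planck constant in J·s (exact by SI definition)

def photon_energy(f):
    """Energy in joules of a particle/wave with frequency f in hertz."""
    return h * f

# A 500 THz wave (visible light) carries about 3.3e-19 J.
E = photon_energy(5e14)
assert abs(E - 3.313035075e-19) < 1e-27

# Over one period T = 1/f, the product E·T recovers h:
T = 1 / 5e14
assert abs(E * T - h) < 1e-40
```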
As already mentioned in corollary 2, I_Δ summarizes the momentum, angular
momentum, and the energy of the interaction process and represents the result as a
number defining bits. This fact can be easily depicted by Petri nets. Petri nets are
defined in [3]. A possible Petri net is shown in figure 1.
Figure 1: Petri Net
The circles s1–s4 are called places and represent the state of a system. We can
identify the places with information as described above. Hence, the places s1–s4
identify four information pieces I1–I4 [4].
The rectangle is called a transition and represents, together with the arrows, the
causal relationship between the information pieces.
Corollary 3: Hence, Petri nets are causal nets representing the causal relationship of
information pieces [4].
Corollary 4: Information as defined above cannot be shown in space and/or time,
since said information is not a function of space and/or time. But said information can
be shown in a causal net [4].
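The structure described for figure 1 can be sketched as a tiny Petri-net data structure. The place and transition names below are illustrative assumptions, since the figure itself is not reproduced here:

```python
# Sketch: a Petri net as places (information pieces) and transitions
# (causal relationships), following the description of figure 1.

places = {"s1", "s2", "s3", "s4"}        # circles: information pieces I1..I4
transitions = {"t1"}                     # rectangle: causal relationship
arcs = {("s1", "t1"), ("s2", "t1"),      # arrows into the transition
        ("t1", "s3"), ("t1", "s4")}      # arrows out of the transition

# Well-formedness per [3]: every arc connects a place with a transition,
# never a place with a place or a transition with a transition.
for src, dst in arcs:
    assert (src in places) != (dst in places)
```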

It is a well-known principle of quantum theory that a measurement disturbs the
system which is measured [1]. Such an example is shown in figure 2 [4].
Figure 2: Physical Measurement
The triangle represents a photon γ used to perform a measurement, and the circle
represents an electron e which is measured by the photon γ. The photon starts at
time t1, arrives at time t2 at the electron, interacts with the electron from time t2 to
time t3, is scattered from the electron at time t3, and arrives at the observer at time
t4. The same applies for the spatial coordinates x, y, z, of course, and need not be
repeated [4].
The interaction between electron and photon is guided, of course, by the basic laws
of physics like the law of conservation of energy, and the law of conservation of
momentum and angular momentum (see the discussion above).
We have before the interaction [4], hence before the time t2:

E_1 = E_γ,1 + E_e,1

with E_γ,1 describing the energy of the photon before the interaction, and E_e,1 describing the energy of the electron before the interaction.
We have after the interaction [4], hence after the time t3:

E_2 = E_γ,2 + E_e,2

with E_γ,2 describing the energy of the photon after the interaction, and E_e,2 describing the energy of the electron after the interaction.
According to the law of conservation of energy we have:

E_γ,1 + E_e,1 = E_γ,2 + E_e,2

The energy E_γ,1 + E_e,1 is transformed during the interaction (hence between t2 and t3) by an energy flow to the energy E_γ,2 + E_e,2; and the energy E_γ,2 + E_e,2 is set up during the interaction (hence between t2 and t3) by said energy flow from the energy E_γ,1 + E_e,1.
Hence, we can integrate this equation in time for the time period of the interaction (from t2 to t3), and obtain [4]:

I_γ,1 + I_e,1 = I_γ,2 + I_e,2

This equation can be written as [4]:

I_1 = I_2
Remark: We could have derived the law of conservation of information also by
integrating the momentum in space r and the angular momentum over the angle φ, and
by using the law of conservation of momentum and angular momentum. We could
also have derived the law of conservation of information by integrating the force in
space and time and the torque over the angle and in time, and by using the law of
conservation of momentum, angular momentum, and energy [4].

Corollary 5: The validity of the laws for conservation of energy, momentum, and
angular momentum leads to the law of conservation of information. No information
gets lost during a measurement [4].
Hence, we have four conservation laws [4]:

p_1 = p_2
L_1 = L_2
E_1 = E_2
I_1 = I_2
Corollary 6: Whereas the laws of conservation of energy, momentum, and angular
momentum can be directly observed in the local reference frame of the interacting
particles, the law of conservation of information can only be observed during a
measurement by using the local reference frame of the particles and the local
reference frame of the observer [4].
Corollary 7: According to Albert Einstein's special theory of relativity the reference
frames of particles and observer are connected by a Lorentz transformation [1].
Hence, space and time get transformed from one reference frame to another, and the
momentum, angular momentum, and energy also get transformed between both
reference frames [1]. Due to the conservation laws above, the information gets
transformed between both reference frames connected by a Lorentz transformation
[4]. It will be shown in corollary 9 below that I is an invariant with respect to Lorentz
transformations.
Corollary 8: Information cannot be observed in a spacetime diagram like
momentum, angular momentum, and energy, since I is not a function of spacetime.
But information can be observed in a causal net. Since the causal net consists of two
dimensions (places and transitions) [3], the observation of information adds two new
dimensions [4].
Contrary to Newtonian mechanics, which does not allow gravitational waves,
electromagnetic theory allows electromagnetic waves [1]. We have the following
equation showing the propagation of an electromagnetic wave [1]:

ψ(r, t) = A e^(i(k_w · r − ωt))

with r as the position vector of the wave in three-dimensional space, t as the time
coordinate of the wave, A as the amplitude of the wave, i as the imaginary unit
(i² = −1) of the complex numbers (ℂ), k_w as the wave vector in three-dimensional
space (not to be confused with the causality axis k described below), showing in
the direction of the propagation of the wave, and ω as the angular frequency of the
wave.
The wave vector k_w defines a measure for the momentum and the angular momentum
of the wave [1]. The angular frequency ω defines a measure for the energy of the
wave [1].
Corollary 5.1: Hence, we conclude that the electromagnetic wave carries
momentum, angular momentum, and energy during the wave propagation process.
Corollary 6.1: We further conclude that the electromagnetic wave does not carry
information during the wave propagation process, since information requires the
definition of a spatial range Δr = r2 − r1, of an angular range Δφ = φ2 − φ1, and of a
temporal range Δt = t2 − t1, which are not present in the equation of the propagation
of the wave.
In order to use the spatial, angular, and temporal ranges of corollary 6.1,
measurements must be defined, such that the wave interacts with measuring
waves/particles in said spatial, angular, and temporal ranges.

Corollary 6.2: Without measurement, the propagation of electromagnetic waves
carries momentum, angular momentum, and energy but no information. With
measurement in a spatial range Δr = r2 − r1, in an angular range Δφ = φ2 − φ1, and
in a temporal range Δt = t2 − t1, the electromagnetic wave additionally carries
information.
As already discussed, information is naturally depicted by causal nets. A causal net
possesses two distinct elements: the places (represented by circles) and the
transitions (represented by rectangles). Figure 3 shows the two possibilities of an
elementary causal net [3].
Figure 3: The two Elementary Structures of a Causal Net
As shown in [3], every place has to be connected to a transition but not to
another place; and every transition has to be connected to a place but not to
another transition. Therefore, the two possibilities place–transition–place or
transition–place–transition define the elementary net structures. It is proven in [3]
that the places and the transitions are dual entities, and that a causal net defines a
continuum.

Carl Adam Petri shows in [3] that one can define a translation distance τ between the
places, and a synchronic distance σ between the transitions. The longer the
distances τ and σ are, the more places and transitions, respectively, are crossed.
Hence, τ and σ define two possible axes, one for the places and one for the
transitions, respectively. In [3], τ and σ are dimensionless, since they are applied to
the abstract structure of causal nets.
We have already shown in corollary 3 that we can identify the places with information
pieces I. In this case, the translation distance τ defines an information axis,
defining distances among the information pieces I, bearing in mind that several
information pieces can occupy the same location on the information axis defined by
τ. In the case of information, τ is no longer dimensionless (like for abstract causal
nets), but possesses the dimension of information, namely

kg·m²/s = N·m·s = J·s = W·s².
The transitions define the interaction of the information pieces according to the law of conservation of information above. Hence, we have information pieces I_s1 ... I_sm before the interaction (s standing for "source"), such that said information pieces are connected by the transition to information pieces I_d1 ... I_dn after the interaction (d standing for "destination"), and such that Σ I_si = Σ I_dj due to the law of conservation of information [4].
Otherwise stated, we have a vector of information pieces before the interaction [4]: I_s = (I_s1, ..., I_sm), and we have a vector of information pieces after the interaction [4]: I_d = (I_d1, ..., I_dn). The transition acts as a matrix K [4] projecting I_s to I_d. The matrix has hence the structure K = (k_ij), i = 1..n, j = 1..m.
Hence, we have the equation [4]

I_d = K I_s

showing how the information before the interaction is projected on the information
after the interaction. Hence, the matrix K showing the interaction described by a
transition is dimensionless. We identify the matrix K with causality, and conclude that
causality is dimensionless, contrary to information [4].
We can use the synchronic distance σ to define a causality axis, k, defining distances
among the matrices K, bearing in mind that several matrices can occupy the same
location on the causality axis defined by σ. In the case of causality, σ remains
dimensionless like for abstract causal nets.
The dimensions I for information and k for causality are the two additional
dimensions mentioned in corollary 8 [4].
In special relativity space and time are transformed from a resting frame to a frame moving with velocity v [1]. The magnitude of v shall be called v. The length l' measured by the moving observer is contracted with respect to the length l measured by the resting observer according to the equation [1]:

l' = l √(1 − v²/c²) = l / γ

with c defining the speed of light in vacuum. γ = 1 / √(1 − v²/c²) is called the Lorentz factor [1].
The angle φ' measured by the moving observer is contracted with respect to the angle φ measured by the resting observer according to the equation [1]:

φ' = φ √(1 − v²/c²) = φ / γ.

The time t' measured by the moving observer is dilated with respect to the time t measured by the resting observer according to the equation [1]:

t' = t √(1 − v²/c²) = t / γ.

When v approaches c, the length l' and the angle φ' as measured by the moving observer tend towards 0, and the time flow t' measured by the moving observer tends towards 0 (otherwise stated: time flow stops for the moving observer when v = c).
The relativistic momentum p is given by the equation [1]:

p = γ m0 v = m0 v / √(1 − v²/c²)

with m0 as the mass of the resting particle. (We focus here on the magnitudes of the vectors, since the vector directions are not relevant for our investigations.)
The relativistic angular momentum L can be derived as follows. We set up the cross product between the spatial position vector r and the relativistic momentum to obtain [1]:

L = |r × p| = |r × m0 v| / √(1 − v²/c²).

(The vector r is not subject to Lorentz contraction, since the movement occurs around r.) The relativistic energy E is given by the equation [1]:

E = γ m0 c² = m0 c² / √(1 − v²/c²).

Hence, when v approaches c, the particle momentum, angular momentum, and energy tend towards infinity.
Information couples momentum with space, angular momentum with angle, and
energy with time in a multiplicative manner, as already described in the equation
defining the information I_Δ above. Therefore, multiplications between p and Δr,
between L and Δφ, and between E and Δt lead to the cancellation of the Lorentz
factor γ.
Corollary 9: Hence, we conclude that the information I' measured by the moving
observer is the same as the information I measured by the resting observer.
Information is therefore an invariant with respect to Lorentz transformations.
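The cancellation of the Lorentz factor can be verified numerically. A short sketch; the rest-frame values and the velocity are illustrative numbers:

```python
# Sketch of corollary 9: the Lorentz factor γ cancels in the products
# p·Δr and E·Δt, so these information contributions are frame-independent.
import math

def gamma(v, c=299_792_458.0):
    """Lorentz factor γ = 1/√(1 − v²/c²)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

c = 299_792_458.0
g = gamma(0.8 * c)

p, dr = 2.0, 5.0        # momentum and spatial range in the resting frame
E, dt = 3.0, 4.0        # energy and temporal range in the resting frame

# In the moving frame: momentum and energy grow by γ, while the
# length and time intervals shrink by γ (see the equations above).
p_m, dr_m = g * p, dr / g
E_m, dt_m = g * E, dt / g

assert abs(p * dr - p_m * dr_m) < 1e-9    # p·Δr is invariant
assert abs(E * dt - E_m * dt_m) < 1e-9    # E·Δt is invariant
```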
The causality was defined above using the matrix K. In the reference frame of the moving observer we have then:

I_d' = K' I_s'.

Since information is an invariant with respect to Lorentz transformations (see corollary 9 above) the vectors I_s' and I_d' of the moving observer are the same as the vectors I_s and I_d of the resting observer, respectively. Therefore, the matrix K' of the moving observer must be the same as the matrix K of the resting observer.
Corollary 10: Hence, we conclude that the causality K' measured by the moving
observer is the same as the causality K measured by the resting observer. Causality
is therefore, like information, an invariant with respect to Lorentz transformations.

In physics, the conservation of momentum, angular momentum, and energy are
fundamental principles as discussed above. Furthermore, we have derived a law of
conservation of information in this paper.
Corollary 11: Since momentum and angular momentum are quantities describing the
features of particles, momentum and angular momentum can be viewed as features
of matter. Therefore, we have conservation laws describing how matter, energy, and
information are conserved [4].
In quantum mechanics, Werner Heisenberg has shown that the uncertainty principle
is valid when measuring the momentum range Δp and spatial range Δx of a particle,
or the energy range ΔE and the temporal location range Δt of a particle. The same
applies, of course, for the measurement of the angular momentum range ΔL of a
particle and the angle location range Δφ of said particle. Carl Adam Petri shows in
[3] that the causal nets also possess an uncertainty relation. We therefore conclude
that the measurement of the information range ΔI and the measurement of the
causality range Δk are also uncertain. Since the information range ΔI and the
causality range Δk always overlap (due to the structure of the causal nets), we
always have to consider ΔI and Δk in combination. Therefore, we have [4]:

Δx · Δp ≥ h
Δφ · ΔL ≥ h
Δt · ΔE ≥ h
Δk · ΔI ≥ h
The uncertainty of information in causality can be formally derived as follows. h is the
smallest possible information (see the discussion of equation (3) above). Hence, ΔI
cannot be smaller than h, leading to the quantization of information. Hence, Δk has to
be the identity element in this case, meaning that the particle is in a stable state, not
interacting with other particles [4].

The first uncertainty relation is explained by Werner Heisenberg in the following
manner [4]. The more accurate the measurement of a spatial position of a particle,
the smaller the wavelength of the measuring wave must be. But the smaller the
wavelength of the measuring wave is, the bigger the momentum of the measuring
wave is, such that the impact on the momentum of the measured particle is big,
leading to a big uncertainty of said momentum. The more accurate the measuring of
a momentum of a particle, the bigger the wavelength of the measuring wave must be,
in order not to influence the particle momentum. But the bigger the wavelength of the
measuring wave is, the less precise the measurement of the spatial position of the
particle is, leading to a big uncertainty in said position. Hence, momentum and space
cannot be determined with big accuracy at the same time.
The second uncertainty relation can be explained in a corresponding manner [4]. The
more accurate the measurement of a rotation angle of a particle, the smaller the
wavelength parallel to the rotational movement of the measuring wave must be. But
the smaller the wavelength parallel to the rotational movement of the measuring
wave is, the bigger the angular momentum of the measuring wave is, such that the
impact on the angular momentum of the measured particle is big, leading to a big
uncertainty of said angular momentum. The more accurate the measuring of an
angular momentum of a particle, the bigger the wavelength parallel to the rotational
movement of the measuring wave must be, in order not to influence the particle
angular momentum. But the bigger the wavelength parallel to the rotational
movement of the measuring wave is, the less precise the measurement of the
rotation angle of the particle is, leading to a big uncertainty in said rotation angle.
Hence, angular momentum and rotation angle cannot be determined with big
accuracy at the same time.
The third uncertainty relation is explained by Werner Heisenberg in a similar manner
[4]. The more accurate the measurement of a temporal position of a particle, the
bigger the frequency of the measuring wave must be. But the bigger the frequency of
the measuring wave is, the bigger the energy of the measuring wave is, such that the
impact on the energy of the measured particle is big, leading to a big uncertainty of
said energy. The more accurate the measuring of an energy of a particle, the smaller
the frequency of the measuring wave must be, in order not to influence the particle

energy. But the smaller the frequency of the measuring wave is, the less precise the
measurement of the temporal position of the particle is, leading to a big uncertainty
in said position. Hence, energy and time cannot be determined with big accuracy at the
same time.
The fourth uncertainty relation can be explained as follows [4]. The more accurate
the measurement of the information of a particle, the more isolated the particle must
be, in order to be able to measure its energy, momentum, and angular momentum
states. But the more isolated said particle is, the more interactions with other
particles are destroyed, leading to a big uncertainty of causality. The more accurate
the measurement of causality shall be, the more particle interactions shall be
observed. But the more particle interactions shall be observed, the less isolated
single particles shall be, leading to a big uncertainty of the information of a single
particle. Hence, information and causality cannot be determined with big accuracy at
the same time.
Corollary 12: The substances matter (as defined by its momentum and angular
momentum), energy, and information are quantized and lead to an uncertainty
relation in their existence forms, space, time, and causality [4].
Corollary 13: In the physical view of a single local reference frame, there are 8
dimensions: 3 dimensions of space (x, y, z), 3 complementary dimensions of space
given by the overall momenta ((x, p_x), (y, p_y), (z, p_z)) [1], 1 dimension of time
(t), and 1 complementary dimension of time given by the energy (E) [1]. In the
physical view of several local reference frames, there are 10 dimensions: the 8
dimensions of the single local reference frame, 1 dimension of causality (k), and 1
complementary dimension of causality given by the information (I) [4].
Thus we have derived a well-founded reasoning for definition 1 defining a system in
automation systems.

3. The Functionality of Automation Processes
Distributed automation systems consist of components for the handling of material
flows. These material resources demand on their part energy handling resources. For
influencing the material and energy resources, information has to be exchanged
between the components; therefore information resources are required.
The material, energy and information systems of automation engineering interact with
each other. Only an integrated description of the resources respecting all interactions
guarantees a correct and exact modelling and prediction of the system behavior.
The automation theory is characterized by a given number of operations per spatial
volume V and per time interval t conducted by the technical resources. This fact
implies a continuity equation of the substances matter, energy and information,
respectively, in the world of automation engineering. Spatial, temporal and causal
translations of the substances can thus be regarded as transformations of
coordinates which are invariant in the quadratic form, like in the special theory of
relativity.
The formulation of a generalized continuity equation including spatial, temporal and
causal translations leads to balance equations for the different components of
automation systems. Balance equations can be evaluated for the computation of task
scheduling algorithms respecting the actual load of the technical resource.
For the formulation of the generalized continuity equation two problems have to be
solved: first, the measuring of causal translations, which on its part implies the second
problem of a geometric derivation of causality. In physics the structure of the
existence forms space and time is described by spatial and temporal (clock)
measures.
dimensional spacetime of the special theory of relativity. This integration respects the
physical demands on spacetime translations describing them in a correct and precise
form.

Petri nets offer the excellent property of permitting a geometric derivation of
causality. The causal measures obtained from Petri nets define the necessary causal
ordering relations. A five-dimensional generalized spacetime of automation
engineering including causality can now be derived.
The behavior of automation systems depends on the different causal states and on
the allowed transitions implied by the selection rules for causal transitions. From this
point of view automation systems behave like physical microsystems. Therefore the
matrix transition elements of quantum mechanics can be transferred to causal matrix
transition elements. These operators are obtained by analyzing the Petri net
representation of automation systems.
For the realization of spatial, temporal and causal translations technical resources
are required. In automation engineering the three classes of material, energy and
information resources appear and interact with each other. Each class can be divided
into three functional basic elements: causal-processing (P), spatial-transportation (T)
and temporal-storage (S) functions [5].
In a production system P functions are represented by the machines, T functions by
the transportation system, and S functions by the storage units. Energy systems own
turbines and overland transmission lines which act like P and T functions, respectively;
accumulators represent the S functions. Processors and communication systems are
the P and T functions of information systems, respectively; the S functions are
specified by the memory of the information system. Table 1 summarizes these
reflections [5].
Substance     P               T               S
Matter        Production      Transportation  Storage
Energy        Transformation  Distribution    Storage
Information   Processing      Communication   Storage
Table 1: Substances and Functional Basic Elements

Every functional basic element is afflicted with an evaluation time t, which is required
for the task. The P functions effect a transformation:

x' = f_P(x, t_P)

x describes the causal state of the resource, t_P is the processing time. The P
function transforms the causal state of the system in the time t_P. For the T functions
the equation

Δr = f_T(r, t_T)

describes the spatial translations Δr conducted by the T functions in time t_T
(communication time). The equation

Δt = f_S(t, t_S)

specifies the temporal translations Δt conducted by the S functions in time t_S.
The parameters t_P, t_T and t_S are determined by the implementation of the resources.
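The three basic functions can be sketched as plain callables. The concrete function bodies below are illustrative placeholders, since the source leaves f_P, f_T and f_S abstract:

```python
# Sketch: the three functional basic elements of an automation resource.
# P transforms the causal state, T performs a spatial translation,
# S performs a temporal translation; each takes its own evaluation time.

def f_P(x, t_P):
    """Processing: transform the causal state x within processing time t_P."""
    return ("processed", x), t_P

def f_T(r, t_T):
    """Transportation: spatial translation r within communication time t_T."""
    return r, t_T

def f_S(t, t_S):
    """Storage: temporal translation t within storage time t_S."""
    return t, t_S

def total_time(t_P, t_T, t_S):
    """A job passing one P, one T and one S element in sequence needs
    the sum of the three evaluation times."""
    return t_P + t_T + t_S

state, _ = f_P("raw-part", 2.0)
assert state == ("processed", "raw-part")
assert total_time(2.0, 1.0, 0.5) == 3.5
```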
The P, T, and S functions depict physical processes in a natural manner, see table 2.

Substance     Space (T)              Time (S)               Causality (P)
Matter        Magnetic               Electric               Photonic
              (Inductor L)           (Capacitor C)          (Resistor R)
Energy        Heat                   Internal Energy        Work
              (Transitive)           (Potential)            (Interacting)
Information   Transmission System    Storage System         Processing System
              (Communicating)        (Memorizing)           (Computing)
Table 2: Physical Processes
Causal dependencies of automation systems can be described in a simple graphic
and precise mathematic way using Petri nets, as already explained above. According
to figure 3 we have two possible elementary net structures. We can use the
translation distance τ between the places, and/or the synchronic distance σ between
the transitions. Since the approach using P, T, and S functions is based on
measuring the system states when defining a process, we shall use the translation
distance τ between the places as our measure for information [5]. The measure is
applied to pure nets in this case, such that τ is dimensionless like for pure nets, and
does not possess the dimension of information. Figure 4 and table 3 show how this
causal measure can be applied to a random causal structure.
Figure 4: Causal Structure

     1   2   3   4   5   6   7   8
1    0   0   1   1   2   3   3   4
2    0   0   1   1   2   3   3   4
3   -1  -1   0   0   1   2   2   3
4   -1  -1   0   0   1   2   2   3
5   -2  -2  -1  -1   0   1   1   2
6   -3  -3  -2  -2  -1   0  -2   1
7   -3  -3  -2  -2  -1   2   0   3
8   -4  -4  -3  -3  -2  -1  -3   0
Table 3: Causal Distances according to τ
The rows represent the starting causal states, the columns represent the target
causal states. Row i and column j indicate the causal distance between state i and
state j. If i = j (start and target state are identical) the distance is 0. If there is no other
possibility to proceed from one state to the other than to proceed once in the direction
of the arrows and once in the opposite arrow direction, then the causal distance is also
set to 0, because the causal states are independent of each other. An example is
given in figure 4 between states 1 and 2 of the Petri net.
If one can proceed from a state to the other as well as in arrow direction as also
against arrow direction, a specification needs to be formulated about the positioning
of the coordinate system. This specification explains which of the two alternatives is
counted as positive and which as negative. Like in affine geometry the causal
distances of opposite direction are counted as negative. Every feedback loop is
building a
place invariant
. Therefore the global statement can be formulated that
every place invariant of the Petri net theory makes an orientation of the coordinate
system necessary.
For an integrated specification of automation systems, spatial, temporal, and causal measures must be unified, as in the special theory of relativity where space and time build the four-dimensional spacetime. The first step is to find a mapping of the causal axis to another axis. Defining

t_k = t / k

as a causal normalized time specifies the time in seconds for a causal transition on the axis. t_k depends on the distance of two states in spacetime. A multiplication of t_k with the causal distance k leads to the time in seconds which is necessary for the causal transition. By this method the causal axis can be mapped to another axis.
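As a small numerical illustration of this mapping (the values are assumed for the example only):

```python
def causal_to_seconds(k: int, t_k: float) -> float:
    """Map a causal distance k to a duration via the causal normalized time t_k."""
    return k * t_k

# Assumed example values: t_k = 0.25 s per causal unit, causal distance 4,
# so the causal transition takes 4 * 0.25 s = 1 s on the time axis.
duration = causal_to_seconds(4, 0.25)
```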
Every system state (eigenstate) can thus be characterized by a 5-tuple (x_0, x_1, x_2, x_3, x_4), where x_0 describes the temporal, x_1, x_2, x_3 the three spatial, and x_4 the causal position in a five-dimensional generalized spacetime.
The velocity of propagation for spatial translations is limited by the speed of light c. Because of the embedding of causality in physical spacetime, causal propagations are limited by c too. The causal axis acts from this point of view like an additional spatial axis! This leads to the possibility of transferring the geometry of the special theory of relativity to generalized spacetime. The generalized spacetime geometry is determined by the invariance under translations of the quadratic form [5]:
q(x) = -x_0^2 + x_1^2 + x_2^2 + x_3^2 + x_4^2
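The translation invariance of this quadratic form can be checked numerically. The sketch below (plain Python, names assumed) applies q to the difference vector of two events in generalized spacetime and verifies that translating both events by the same vector leaves the result unchanged:

```python
def q(x):
    """Quadratic form of generalized spacetime: -x0^2 + x1^2 + x2^2 + x3^2 + x4^2."""
    x0, x1, x2, x3, x4 = x
    return -x0**2 + x1**2 + x2**2 + x3**2 + x4**2

def interval(a, b):
    """q applied to the difference vector of two events (x0, x1, x2, x3, x4)."""
    return q(tuple(bi - ai for ai, bi in zip(a, b)))

# Two events and a common translation (values chosen arbitrarily).
a = (1.0, 2.0, 0.0, 0.0, 3.0)
b = (4.0, 2.0, 1.0, 0.0, 5.0)
shift = (10.0, -7.0, 3.0, 1.0, 2.0)
a2 = tuple(ai + si for ai, si in zip(a, shift))
b2 = tuple(bi + si for bi, si in zip(b, shift))
```

The interval between a and b equals the interval between the shifted events, which is exactly the invariance the geometry demands.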
For the specification of the dynamic behaviour of automation systems, quantum mechanical methods can be used. The spatial eigenstates of the substances are encoded in the wave function (not to be confused with the electromagnetic wave discussed above); transitions between different eigenstates, including selection rules, are specified by matrix transition elements ⟨ψ| M |φ⟩. ⟨ψ| is a vector of dimension (1, n) representing the starting state. |φ⟩ has the dimension (n, 1) and represents the target state. The quadratic (n, n) matrix M encodes the selection rules. |⟨ψ| M |φ⟩| is equal to the transition probability between state ⟨ψ| and state |φ⟩ [5].
Automation systems consist of discrete causal states (eigenstates). Translations between causal states are determined by selection rules encoded in the Petri net representation of the system. This similarity between quantum mechanical microsystems and automation systems allows the transfer of the quantum theoretical matrix transition elements, which describe selection rules, to causality. The following definition is made: M is a quadratic matrix which contains as many rows and columns as the number of system eigenstates. If the causal distance between state i and state j is one, then the element M(i, j) = 1, else M(i, j) = 0. For the causal structure of figure 4, M is:
M =
    0 0 1 0 0 0 0 0
    0 0 1 0 0 0 0 0
    0 0 0 0 1 0 0 0
    0 0 0 0 1 0 0 0
    0 0 0 0 0 1 1 0
    0 0 0 0 0 0 0 1
    0 0 0 0 1 0 0 0
    0 0 0 0 0 0 0 0

|⟨ψ| M |φ⟩| = 1 if the causal distance is one (allowed transition), else |⟨ψ| M |φ⟩| = 0 [5].
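This selection rule can be checked numerically. In the sketch below (plain Python, no external libraries), each eigenstate i is encoded as the i-th unit vector, so ⟨ψ| M |φ⟩ simply reads off M(i, j); the arc list is my reading of the transition matrix for figure 4 and is an assumption of the example.

```python
def unit(i, n=8):
    """Eigenstate i (1-based) as a basis vector of dimension n."""
    return [1.0 if k == i - 1 else 0.0 for k in range(n)]

# Allowed direct causal transitions (i, j): each pair sets M(i, j) = 1.
# This list is an assumption read off the matrix for figure 4.
M = [[0.0] * 8 for _ in range(8)]
for i, j in [(1, 3), (2, 3), (3, 5), (4, 5), (5, 6), (5, 7), (6, 8), (7, 5)]:
    M[i - 1][j - 1] = 1.0

def transition_element(psi, M, phi):
    """<psi| M |phi> for real vectors: psi^T M phi."""
    n = len(M)
    return sum(psi[i] * M[i][j] * phi[j] for i in range(n) for j in range(n))
```

For unit vectors the matrix element is 1 exactly for the allowed transitions (e.g. 5 → 6) and 0 otherwise (e.g. 1 → 8).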
In four-dimensional spacetime the continuity equation is [1]:

∂Q/∂t + ∇·J = 0

The first term specifies the change of the substance Q per time unit, the second term is the divergence of the substance current J. In generalized spacetime the invariance of the quadratic form demands an additional term specifying the causal translations. Using causal matrix transition elements, the generalized continuity equation can be written as:
∂Q/∂t + ∇·J + Σ_{i=1}^{n} |⟨ψ| M |φ⟩|_i / t_i = 0
In every automation or information system, translations in generalized spacetime obey the generalized continuity equation. In many applications the temporal behavior of the system is of great interest, for example in applications with real-time requirements. Evaluating the generalized continuity equation for individual subsystems yields balance equations which describe the load of the technical resources. Depending on this load, the scheduling system determines the distribution of the tasks to the different technical resources.
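Such load-based task distribution can be illustrated with a deliberately simple sketch. Everything here — the resource names, the load measure, and the greedy assignment rule — is a simplifying assumption for illustration, not the scheduling method of the text: each resource carries a balance of pending work, and each new task is assigned to the resource with the smallest balance.

```python
def schedule(tasks, resources):
    """Greedy illustration: assign each task (name, duration) to the least-loaded resource."""
    load = {r: 0.0 for r in resources}
    assignment = {}
    for name, duration in tasks:
        target = min(load, key=load.get)   # resource with the smallest balance
        assignment[name] = target
        load[target] += duration           # update that resource's balance
    return assignment, load

# Hypothetical tasks with durations in seconds, spread over two resources.
tasks = [("t1", 3.0), ("t2", 2.0), ("t3", 2.0), ("t4", 1.0)]
assignment, load = schedule(tasks, ["r0", "r1"])
```

In this toy run the balances end up equal (4.0 s each), which is the intent of evaluating the balance equations per subsystem before distributing work.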
Automation theory is characterized by a given number of operations per volume V and per time interval t. This fact implies the existence of a generalized continuity equation including causal translations of material, energy, and information resources. The derivation of the generalized continuity equation is obtained by generalizing the spacetime geometry of the special theory of relativity to the world of automation engineering, and by quantizing causal translations with causal matrix transition elements transferred from quantum mechanics.

Evaluating the generalized continuity equation for different subsystems leads to the derivation of balance equations. These equations are used by the scheduling system for the integrated task distribution in the automation system.

Thus we have derived a well-founded reasoning for definition 2, which defines a process in automation systems by the three functions P, T, and S applied to matter, energy, and information, yielding the nine elements of a process.
It can finally be concluded that:

Corollary 14: Systems consist of ten physical dimensions according to definition 1, and are governed by the laws of conservation of momentum, angular momentum, energy, and information. Information and causality are Lorentz invariant, contrary to momentum, angular momentum, energy, space, and time. Momentum and space, angular momentum and angle, energy and time, and information and causality each possess an uncertainty relation.

Corollary 15: Processes consist of nine physical dimensions according to definition 2, and are governed by three functional elements P, T, and S applied to matter, energy, and information. Matter, energy, and information are each described in a generalized spacetime and governed by a generalized continuity equation.

Corollary 16: Systems and processes define the structure and functionality of automation systems in a physically complete manner by respecting all relevant physical parameters: momentum, angular momentum, energy, information, space, time, and causality. Hence, said systems and processes define automation theory in a complete manner.
Bibliography
[1]: Carlo Maria Becchi; Massimo D'Elia: Introduction to the Basic Concepts of
Modern Physics; Special Relativity, Quantum and Statistical Physics; Third Edition;
Springer; 2016.
[2]: https://en.wikipedia.org/wiki/Planck_units#Cosmology
[3]: C. A. Petri: Nets, time and space; Theoretical Computer Science 153; 1996.
[4]: A. Mircescu: Physical Definition of Information and Causality, their Special Relativistic and Quantum Mechanical Structures, and the Law of Conservation of Information; GRIN, Catalog Number V343915, 2016; ISBN: 9783668365209.
[5]: A. Mircescu: Über die Beschreibung und Optimierung verteilter Automatisierungssysteme; Doctoral Thesis, Technische Universität Carolo-Wilhelmina zu Braunschweig, 1997.
End of the reading sample (29 pages)

Details
Title: Automation Theory Defined by Systems and Processes
Author: Dr. Alexander Mircescu
Year: 2017
Pages: 29
Catalog Number: V351220
ISBN (eBook): 9783668378780
ISBN (Book): 9783668378797
File size: 697 KB
Language: English
Keywords: automation theory

Cite this work: Dr. Alexander Mircescu (author), 2017, Automation Theory Defined by Systems and Processes, München, GRIN Verlag, https://www.grin.com/document/351220
