Control System Design Slide 2
STATE SPACE MODEL REPRESENTATION
Frequency Domain Classical Approach: Laplace Transform
This approach is based on converting a system's differential equation to a transfer function.
It generates a mathematical model of the system that algebraically relates a representation of the output to a representation of the input.
The primary disadvantage: it can be applied only to linear, time-invariant systems or systems that can be approximated as such.
Why a State-Space Model? Modern Approach: State-Space Model
The state-space approach can be used to represent nonlinear systems that have backlash, saturation, and dead zone.
Also, it can conveniently handle systems with nonzero initial conditions.
Time-varying systems (for example, missiles with varying fuel levels, or lift in an aircraft flying through a wide range of altitudes) can be represented in state space.
Multiple-input, multiple-output systems (such as a vehicle with input direction and input velocity yielding an output direction and an output velocity) can be compactly represented in state space.
Some Observations
1. We select a particular subset of all possible system variables and call the variables in this subset state variables.
2. For an nth-order system, we write n simultaneous, first-order differential equations in terms of the state variables. We call this system of simultaneous differential equations state equations.
3. If we know the initial condition of all of the state variables at t0, as well as the system input for t > t0, we can solve the simultaneous differential equations for the state variables for t > t0.
4. We algebraically combine the state variables with the system's input and find all of the other system variables for t > t0. We call this algebraic equation the output equation.
5. We consider the state equations and the output equations a viable representation of the system. We call this representation of the system a state-space representation.
Example
Let us now follow these steps through an example. Consider the RL network shown in the figure, with an initial current of i(0).
1. We select the current, i(t), for which we will write and solve a differential equation using Laplace transforms.
2. We write the loop equation: L di(t)/dt + R i(t) = v(t).
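The solution of this loop equation can be checked numerically. A minimal sketch, assuming a series RL circuit with a constant source voltage v(t) = V; the component values and initial current below are illustrative, not taken from the figure:

```python
import math

# Illustrative values (not from the figure)
R, L = 2.0, 0.5      # ohms, henries
V = 10.0             # constant source voltage v(t) = V
i0 = 1.0             # initial current i(0)

# Loop equation: L di/dt + R i = V
# Analytical solution: i(t) = V/R + (i0 - V/R) e^(-R t / L)
def i_exact(t):
    return V / R + (i0 - V / R) * math.exp(-R * t / L)

# Forward-Euler integration of the same ODE as a cross-check
def i_euler(t_end, dt=1e-5):
    i, t = i0, 0.0
    while t < t_end:
        i += dt * (V - R * i) / L
        t += dt
    return i

print(i_exact(1.0), i_euler(1.0))
```

The two values agree closely, confirming that the exponential expression solves the loop equation.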
Example 2
Restrictions
Typically, the minimum number of state variables required to describe a system equals the order of the differential equation. Thus, a second-order system requires a minimum of two state variables to describe it.
We can define more state variables than the minimal set;
however, within this minimal set the state variables must be linearly independent. For example, if vR(t) is chosen as a state variable, then i(t) cannot be chosen, because vR(t) can be written as a linear combination of i(t), namely vR(t) = R i(t).
State variables must be linearly independent; that is, no state variable can be written as a linear combination of the other state variables, or else we would not have enough information to solve for all other system variables, and we could even have trouble writing the simultaneous equations themselves.
Another way to determine the number of state variables is to count the number of independent energy-storage elements in the system.
In Figure below there are two energy-storage elements, the capacitor and the inductor. Hence, two state variables and two state equations are required for the system.
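The energy-storage count can be made concrete. A minimal sketch, assuming a series RLC network with the source voltage as input and the capacitor voltage as output; the component values are illustrative, not taken from the figure:

```python
import numpy as np

# Illustrative series RLC values (not from the figure)
R, L, C = 1.0, 0.5, 0.25

# Two energy-storage elements -> two state variables:
#   x1 = inductor current i(t), x2 = capacitor voltage vC(t)
# KVL:       L di/dt = v(t) - R i - vC
# Capacitor: C dvC/dt = i
A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])
B = np.array([[1.0 / L],
              [0.0]])
C_out = np.array([[0.0, 1.0]])   # choose vC(t) as the output

print(A.shape, B.shape, C_out.shape)
```

Two storage elements give a 2x2 A matrix, i.e., two first-order state equations.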
State Space Model Representation
A time-varying control system is a system in which one or more of the parameters of the system may vary as a function of time.
The state of a system is a set of variables whose values, together with the input signals and the equations describing the dynamics, provide the future state and output of the system.
The state variables describe the present configuration of a system and can be used to determine the future response, given the excitation inputs and the equations describing the dynamics.
The state differential equation
State Eq.: dx(t)/dt = A x(t) + B u(t)
Output Eq.: y(t) = C x(t) + D u(t)
Linearized state and output equations
General continuous-time linear dynamical system: dx(t)/dt = A(t) x(t) + B(t) u(t), y(t) = C(t) x(t) + D(t) u(t)
Linear time-invariant (LTI) state dynamics: the matrices A, B, C, and D are constant.
Block diagram of the linear, continuous-time control system represented in state space.
Example 1:
Example 2:
Assume the voltage v(t) is the output. Apply Kirchhoff's voltage and current laws.
TF to state space
Consider the transfer function
Y(s)/U(s) = 5 / (s^3 + 2s^2 + 3s + 5)
which corresponds to the differential equation
d^3y/dt^3 + 2 d^2y/dt^2 + 3 dy/dt + 5y = 5u
If the second derivative of y is designated as x3, the first derivative as x2, and y itself as x1, then
dx1/dt = x2
dx2/dt = x3
dx3/dt = -5x1 - 3x2 - 2x3 + 5u
In matrix (phase-variable) form,
[dx1/dt]   [ 0  1  0] [x1]   [0]
[dx2/dt] = [ 0  0  1] [x2] + [0] u
[dx3/dt]   [-5 -3 -2] [x3]   [5]
y = [1 0 0] [x1 x2 x3]^T
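The phase-variable model can be verified numerically by recovering the transfer function from the matrices, since G(s) = C (sI - A)^(-1) B should equal 5/(s^3 + 2s^2 + 3s + 5) at every value of s:

```python
import numpy as np

# Phase-variable (companion) form for Y(s)/U(s) = 5/(s^3 + 2s^2 + 3s + 5)
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-5.0, -3.0, -2.0]])
B = np.array([[0.0], [0.0], [5.0]])
C = np.array([[1.0, 0.0, 0.0]])

# Transfer function recovered from the state-space model
def g_ss(s):
    n = A.shape[0]
    return (C @ np.linalg.inv(s * np.eye(n) - A) @ B)[0, 0]

# Transfer function taken directly from the polynomial form
def g_tf(s):
    return 5.0 / (s**3 + 2 * s**2 + 3 * s + 5)

for s in (1.0, 2.5, 10.0):
    print(s, g_ss(s), g_tf(s))
```

Both evaluations agree at each sample point, confirming the realization.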
State-Space Models to TFs
Example
Linearization is the process of finding a linear model of a system that approximates a nonlinear one. Over 100 years ago, Lyapunov proved that if a linearized model of a system is valid near an equilibrium point of the system and if this linearized model is stable, then there is a region around this equilibrium point that contains the equilibrium, within which the nonlinear system is also stable.
Basically this tells us that, at least within a region of an equilibrium
point, we can investigate the behavior of a nonlinear system by analyzing the behavior of a linearized model of that system.
This form of linearization is also called small-signal linearization.
Linearization
Equilibrium points
Example
Consider a nonlinear time-invariant system.
Assume that the input u(t) fluctuates around u = 2.
Find an operating point with uQ = 2 and a linearized model around it.
y_nl(t): nonlinear system output; y_l(t): linearized system output, for a square-wave input u(t)
Example
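Since the system equations for this example are not reproduced in the text, the sketch below illustrates the linearization procedure on a hypothetical nonlinear system dx/dt = -x^3 + u with uQ = 2; the partial derivatives are estimated by central differences:

```python
import numpy as np

# Hypothetical nonlinear system for illustration only: dx/dt = f(x, u)
def f(x, u):
    return -x**3 + u

uQ = 2.0
# Operating (equilibrium) point: solve f(xQ, uQ) = 0  ->  xQ = uQ**(1/3)
xQ = uQ ** (1.0 / 3.0)

# Small-signal linearization: d(dx)/dt ~ a*dx + b*du,
# with a = df/dx and b = df/du evaluated at the operating point Q
eps = 1e-6
a = (f(xQ + eps, uQ) - f(xQ - eps, uQ)) / (2 * eps)
b = (f(xQ, uQ + eps) - f(xQ, uQ - eps)) / (2 * eps)

print(xQ, a, b)   # a ~ -3*xQ**2, b ~ 1
```

Near the operating point, the nonlinear system behaves like the first-order linear model with these coefficients, which is the small-signal linearization described above.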
Solution of state differential equation
x(t) = exp(At) x(0) + ∫₀ᵗ exp(A(t - τ)) B u(τ) dτ
For the scalar case dx/dt = ax + bu, taking Laplace transforms gives
sX(s) - x(0) = aX(s) + bU(s)
X(s) = x(0)/(s - a) + [b/(s - a)] U(s)
so that
x(t) = e^(at) x(0) + ∫₀ᵗ e^(a(t-τ)) b u(τ) dτ
The matrix exponential is defined by the series
exp(At) = I + At + (At)^2/2! + ... + (At)^k/k! + ...
which converges for all finite t and any A.
The solution of state differential equation
For dx/dt = Ax + Bu, taking Laplace transforms gives
X(s) = [sI - A]^(-1) x(0) + [sI - A]^(-1) B U(s)
where [sI - A]^(-1) is the Laplace transform of Φ(t) = exp(At).
Φ(t): fundamental or state transition matrix.
x(t) = Φ(t) x(0) + ∫₀ᵗ Φ(t - τ) B u(τ) dτ
The solution to the unforced system (that is, when u = 0) is simply
x(t) = Φ(t) x(0)
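The series definition of exp(At) and the unforced solution x(t) = Φ(t) x(0) can be checked on a small example; the matrix A below is illustrative:

```python
import numpy as np

# Truncated series exp(At) = I + At + (At)^2/2! + ...
# (the series converges for all finite t and any A)
def expm_series(A, t, terms=30):
    n = A.shape[0]
    M = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ (A * t) / k   # (At)^k / k!
        M += term
    return M

# Unforced response x(t) = Phi(t) x(0) with Phi(t) = exp(At)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # illustrative stable system
x0 = np.array([1.0, 0.0])
Phi = expm_series(A, 1.0)
x1 = Phi @ x0
print(x1)
```

For this A (eigenvalues -1 and -2) the exact unforced response is x1(t) = 2e^(-t) - e^(-2t), x2(t) = -2e^(-t) + 2e^(-2t), which the truncated series reproduces to machine precision.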
Example
Block Diagram Algebra
Introduction
A graphical tool can help us to visualize the model of a system and evaluate the mathematical relationships between their elements, using their transfer functions.
In many control systems, the system of equations can be written so that their components do not interact except by having the input of one part be the output of another part.
In these cases, it is very easy to draw a block diagram that represents the mathematical relationships, in a manner similar to that used for the component block diagram.
Reminder: Component Block Diagram
Block Diagram
It represents the mathematical relationships between the elements of the system.
The transfer function of each component is placed in a box, and the input-output relationships between components are indicated by lines and arrows.
Y1(s) = G1(s) U1(s)
Block Diagram Algebra
Using block diagrams, we can solve the equations by graphical simplification, which is often easier and more informative than algebraic manipulation, even though the methods are in every way equivalent.
It is convenient to think of each block as representing an electronic amplifier with the transfer function printed inside.
The interconnections of blocks include summing points, where any number of signals may be added together.
Block Diagram Representations for LTI Control Systems
(a) Cascaded system; (b) parallel system; (c) feedback (closed-loop) system.
1st & 2nd Elementary Block Diagrams
Blocks in series: Y(s)/U(s) = G1(s) G2(s)
Blocks in parallel with their outputs added: Y(s)/U(s) = G1(s) + G2(s)
3rd Elementary Block Diagram
Single-loop negative feedback
Two blocks are connected in a feedback arrangement so that each feeds into the other:
The overall transfer function is given by:
Y(s)/R(s) = G1(s) / (1 + G1(s) G2(s))
Feedback Rule
The gain of a single-loop negative feedback system is given by the forward gain divided by 1 plus the loop gain:
Y(s)/R(s) = G1(s) / (1 + G1(s) G2(s))
Closed Loop (Feedback) System
Y(s) = G1(s) G2(s) E(s)
     = G1(s) G2(s) [R(s) - H(s) Y(s)]
Y(s) [1 + G1(s) G2(s) H(s)] = G1(s) G2(s) R(s)
Y(s)/R(s) = G1(s) G2(s) / [1 + G1(s) G2(s) H(s)]
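The derivation can be confirmed numerically: iterating the loop equations Y = G1 G2 E and E = R - H Y converges (when the loop gain is small enough in magnitude) to the closed-form ratio. The first-order blocks below are illustrative:

```python
# Illustrative blocks (not from any particular figure)
def G1(s): return 4.0 / (s + 1.0)
def G2(s): return 1.0 / (s + 2.0)
def H(s):  return 0.5

# Closed-form result: Y/R = G1 G2 / (1 + G1 G2 H)
def closed_loop(s):
    return G1(s) * G2(s) / (1.0 + G1(s) * G2(s) * H(s))

# Cross-check by iterating Y = G1 G2 (R - H Y); converges since
# the loop gain |G1 G2 H| < 1 at the sample point used below
def closed_loop_iterated(s, r=1.0, iters=200):
    y = 0.0
    for _ in range(iters):
        y = G1(s) * G2(s) * (r - H(s) * y)
    return y

s = 1.0
print(closed_loop(s), closed_loop_iterated(s))
```

Both computations give the same value, matching the algebraic elimination of E(s) above.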
1st Elementary Principle of Block Diagram Algebra
2nd Elementary Principle of Block Diagram Algebra
3rd Elementary Principle of Block Diagram Algebra
Example 1: Transfer function from a Simple Block Diagram
T(s) = Y(s)/R(s)
T(s) = [2/(s(s + 4))] / [1 + 2/(s(s + 4))]
T(s) = 2 / (s^2 + 4s + 2)
Block Diagram and its corresponding Signal Flow Graph
A compact alternative notation to the block diagram.
It characterizes the system by a network of directed branches and associated transfer functions.
The two ways of depicting signals are equivalent.
Closed-Loop System Subjected to a Disturbance
where |G1(s)H(s)| >> 1 and |G1(s)G2(s)H(s)| >> 1. In this case, the closed-loop transfer function CD(s)/D(s) becomes almost zero, and the effect of the disturbance is suppressed. This is an advantage of the closed-loop system.
To draw a block diagram for a system, write the equations that describe the dynamic behavior of each component.
Then take the Laplace transforms of these equations, assuming zero initial conditions.
Represent each Laplace-transformed equation individually in block form.
Assemble the elements into a complete block diagram.
Procedures for Drawing a Block Diagram
Example
Example 2: TF from the Block Diagram (Block Diagram Reduction)
T(s) = (G1 G2 G5 + G1 G6) / (1 + G1 G3 + G1 G2 G4)
Example: Find the equivalent transfer function
Basic Control Actions
Industrial Controllers
On-off Controllers
Proportional Controllers
Integral Controllers
Proportional-plus-Integral Controllers
Proportional-plus-Derivative Controllers
Proportional-plus-Integral-plus-Derivative Controllers
Basic Operations of a Feedback Control
Think of what goes on in a domestic hot-water thermostat:
The temperature of the water is measured.
Comparison of the measured and the required values provides an error, e.g. too hot or too cold.
On the basis of error, a control algorithm decides what to do.
Such an algorithm might be:
If the temperature is too high, then turn the heater off.
If it is too low, then turn the heater on.
The adjustment chosen by the control algorithm is applied to some adjustable variable, such as the power input to the water heater.
In a two-position control system, the actuating element has only two fixed positions, which are, in many cases, simply on and off.
Let the output signal from the controller be u(t) and the actuating error signal be e(t).
In two-position control, the signal u(t) remains at either a maximum or a minimum value, depending on whether the actuating error signal is positive or negative, so that
u(t) = U1 for e(t) > 0
u(t) = U2 for e(t) < 0
where U1 and U2 are constants. The minimum value U2 is usually either zero or -U1.
Two-position controllers are generally electrical devices, and an electric solenoid-operated valve is widely used in such controllers.
Two-Position or On-Off Control Action
(a) Liquid-level control system; (b) electromagnetic valve.
(a)Block diagram of an on-off controller; (b) block diagram of an on-off controller with differential gap.
Level h(t) versus t curve for the system
The range through which the actuating error signal must move before the switching occurs is called the differential gap. Such a differential gap causes the controller output u(t) to maintain its present value until the actuating error signal has moved slightly beyond the zero value.
For a controller with proportional control action, the relationship between the output of the controller u(t) and the actuating error signal e(t) is u(t) = Kp e(t), or, in Laplace-transformed quantities, U(s)/E(s) = Kp, where Kp is the proportional gain.
Proportional Control Action
(Figure: a unit actuating error e(t) produces a constant controller output u(t) = Kp.)
In a controller with integral control action, the value of the controller output u(t) is changed at a rate proportional to the actuating error signal e(t). That is, du(t)/dt = Ki e(t), or equivalently u(t) = Ki ∫₀ᵗ e(τ) dτ, where Ki is an adjustable constant.
Integral Control Action
The control action of a proportional-plus-integral controller is defined by
u(t) = Kp e(t) + (Kp/Ti) ∫₀ᵗ e(τ) dτ
Proportional-Plus-Integral Control Action
Ti: integral time
The control action of a proportional-plus-derivative controller is defined by
u(t) = Kp e(t) + Kp Td de(t)/dt
where Td is the derivative time.
Proportional-Plus-Derivative Control Action
The combination of proportional control action, integral control action, and derivative control action is termed proportional-plus-integral-plus-derivative control action. This combined action has the advantages of each of the three individual control actions. The equation of a controller with this combined action is given by
u(t) = Kp e(t) + (Kp/Ti) ∫₀ᵗ e(τ) dτ + Kp Td de(t)/dt
Proportional-Plus-Integral-Plus-Derivative Control Action
The PID Algorithm
The PID algorithm is the most popular feedback controller algorithm in use. It is a robust, easily understood algorithm that can provide excellent control performance despite the varied dynamic characteristics of processes.
PID Controller
In the s-domain, the PID controller may be represented as
U(s) = (Kp + Ki/s + Kd s) E(s)
In the time domain:
u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt
Kp: proportional gain; Ki: integral gain; Kd: derivative gain
Definitions
In the time domain:
u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt
     = Kp [ e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt ]
where Ti = Kp/Ki and Td = Kd/Kp
Kp, Ki, Kd: proportional, integral, and derivative gains
Ti: integral time constant; Td: derivative time constant
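The time-domain PID law translates directly into a discrete-time controller. A minimal sketch with illustrative gains, driving a hypothetical first-order plant dx/dt = -x + u toward a setpoint:

```python
# Discrete-time PID controller implementing
# u = Kp*e + Ki*integral(e) + Kd*de/dt (gains are illustrative)
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt                  # rectangular integration
        deriv = (err - self.prev_err) / self.dt         # backward difference
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a hypothetical first-order plant dx/dt = -x + u to a setpoint of 1.0
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(5000):                # 50 s of simulated time
    u = pid.update(1.0 - x)
    x += 0.01 * (-x + u)
print(x)   # settles near the setpoint
```

The integral term drives the steady-state error to zero, which is the practical advantage of the I action noted above; the D term damps the transient.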