60  Control, feedback, and stability

Making a dynamic system behave

A motor overshoots the target speed. A drone wobbles after a gust. A reactor temperature drifts when the feed changes. In each case the system is already dynamic before anyone touches the controller. Feedback is the act of shaping that behaviour instead of merely watching it happen.

Control is not the same thing as solving a differential equation. Solving the equation tells you what the system will do. Control asks whether that behaviour is acceptable, how fast it settles, how much it overshoots, and how sensitive it is to disturbances and modelling error.

This chapter keeps the focus on the central move. You measure the output \(y(t)\), compare it to the target \(r(t)\), form the error \(e(t) = r(t) - y(t)\), and choose a control input \(u(t)\) that changes the dynamics in your favour. Once you see that structure clearly, stability and transient response stop looking like side remarks and start looking like the point of the whole subject.


60.1 The control problem

Start with four signals.

  • \(r(t)\) is the reference: what you want.
  • \(y(t)\) is the output: what the system is actually doing.
  • \(e(t) = r(t) - y(t)\) is the error.
  • \(u(t)\) is the control input you apply to the plant.

The plant is the system being controlled: a motor, furnace, vehicle, converter, or biochemical process. In Laplace-transform language the plant is often represented by a transfer function \(G(s)\), read “G of s.” The controller is represented by \(C(s)\), read “C of s.”

In open loop, you choose \(u(t)\) without using the measured output. In closed loop, you feed back the measured output and let the error signal drive the control action. That is the essential difference.

Unity feedback means the measured output \(Y(s)\) is fed back directly into the comparison point with no additional gain element in the feedback path. The block-diagram algebra is:

\[E(s) = R(s) - Y(s)\] \[U(s) = C(s)E(s)\] \[Y(s) = G(s)U(s)\]

Substitute in sequence:

\[Y(s) = G(s)C(s)\bigl(R(s) - Y(s)\bigr)\]

Rearrange:

\[\bigl(1 + C(s)G(s)\bigr)Y(s) = C(s)G(s)R(s)\]

So the closed-loop transfer function is

\[T(s) = \frac{Y(s)}{R(s)} = \frac{C(s)G(s)}{1 + C(s)G(s)}\]

That denominator matters more than it first appears to. The roots of \(1 + C(s)G(s) = 0\) are the closed-loop poles. They decide whether the response decays, oscillates, grows, or diverges.
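That claim is easy to check numerically. The sketch below (plain numpy; the loop \(C(s)G(s) = 4/(5s+1)\) is an illustrative choice) forms \(1 + C(s)G(s)\) as a polynomial and finds its roots:

```python
import numpy as np

def closed_loop_poles(num_cg, den_cg):
    """Roots of 1 + C(s)G(s) = 0 for C(s)G(s) = num_cg/den_cg.

    Coefficients are listed highest power first; since
    1 + num/den = (den + num)/den, the closed-loop poles are the
    roots of den_cg + num_cg."""
    num = np.zeros(len(den_cg))
    num[len(den_cg) - len(num_cg):] = num_cg  # pad to equal length
    return np.roots(np.asarray(den_cg, dtype=float) + num)

# Illustrative loop: C(s) = 4, G(s) = 1/(5s + 1)  ->  CG = 4/(5s + 1)
print(closed_loop_poles([4.0], [5.0, 1.0]))  # single pole at s = -1
```

Changing the gain changes the polynomial \(den + num\), and therefore the roots: that is feedback moving the poles.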


60.2 What stability means

A control loop is stable if small disturbances do not produce outputs that grow without bound. In continuous-time linear systems, the quick rule is:

  • if every closed-loop pole has negative real part, the response decays
  • if any closed-loop pole has positive real part, the response grows
  • if poles sit on the imaginary axis, the system is marginally stable, and small modelling errors can tip it into instability

This is why pole location is not bookkeeping. It is a statement about physical behaviour.

The same stability idea appears in state-space notation — the same feedback principle, expressed with vectors and matrices instead of transfer functions. For a state-space model

\[\dot{x} = Ax + Bu, \qquad u = -Kx\]

the closed-loop dynamics become

\[\dot{x} = (A - BK)x\]

Now the eigenvalues \(\lambda(A-BK)\) play the same role that closed-loop poles play in transfer-function form. They tell you what the state does in time.
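The eigenvalue test is one line of numerical linear algebra. A minimal sketch (the scalar plant \(A = 2\), \(B = 1\) and gain \(K = 5\) are made-up illustrative values):

```python
import numpy as np

def is_stable(M):
    """True if every eigenvalue of M has negative real part."""
    return bool(np.all(np.linalg.eigvals(M).real < 0))

# Illustrative scalar plant: xdot = 2x + u is unstable on its own.
A = np.array([[2.0]])
B = np.array([[1.0]])
K = np.array([[5.0]])

print(is_stable(A))          # False: open-loop eigenvalue is +2
print(is_stable(A - B @ K))  # True:  closed-loop eigenvalue is -3
```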

Note: Why this works

Feedback changes the equation you are solving. Without control, the plant’s dynamics are built into \(G(s)\) or \(A\). With control, the controller feeds the measured output back into the input channel, so the effective denominator of the system changes from “whatever the plant had” to “plant plus feedback.”

That is the whole reason control is powerful. You are not passively accepting the poles, modes, or eigenvalues the plant came with. You are moving them.

60.3 The core method

For a first pass through a control problem, the workflow is usually:

  1. Write the plant model in transfer-function or state-space form.
  2. Decide what “good behaviour” means: stable, fast enough, low overshoot, small steady-state error, acceptable control effort.
  3. Choose a controller structure: proportional, PI, PID, state feedback, or another architecture appropriate to the system.
  4. Form the closed-loop model.
  5. Inspect the poles or eigenvalues and connect them back to the time response.
  6. Tune, then check what the tuning costs you elsewhere.

The important habit is to keep the interpretation attached to the algebra. A gain value is not just a number. It changes speed, error, sensitivity, and sometimes noise amplification.
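Steps 1, 4, and 5 of the workflow can be sketched in a few lines of numpy. The first-order plant \(G(s) = 1/(2s+1)\) and gain \(K = 3\) below are illustrative choices, not from the text:

```python
import numpy as np

# Step 1: plant G(s) = 1/(2s + 1) with proportional controller C(s) = K.
K = 3.0
num_ol = np.array([K])          # numerator of C(s)G(s)
den_ol = np.array([2.0, 1.0])   # denominator of C(s)G(s)

# Step 4: closed loop. T(s) = CG/(1 + CG), so the closed-loop
# denominator is den_ol + num_ol.
den_cl = np.polyadd(den_ol, num_ol)

# Step 5: inspect the poles and the steady-state (DC) gain T(0).
poles = np.roots(den_cl)
dc_gain = np.polyval(num_ol, 0) / np.polyval(den_cl, 0)
print("poles:", poles)   # [-2.]  -> stable, time constant 0.5
print("T(0):", dc_gain)  # 0.75 = K/(1 + K)
```

Step 6, tuning, is then a loop over \(K\) while watching what the poles and \(T(0)\) do in response.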

60.4 Worked example 1: cruise control with proportional feedback

Suppose a simplified vehicle-speed model has plant transfer function

\[G(s) = \frac{1}{5s + 1}\]

where input is throttle command and output is speed deviation from the desired operating point. Use a proportional controller

\[C(s) = K\]

with unity feedback.

The closed-loop transfer function is

\[T(s) = \frac{KG(s)}{1 + KG(s)} = \frac{K/(5s+1)}{1 + K/(5s+1)} = \frac{K}{5s + 1 + K}\]

So the closed-loop pole is at

\[s = -\frac{1+K}{5}\]

For any \(K > -1\), the pole is in the left half-plane, so the linear model is stable. If \(K > 0\), increasing \(K\) moves the pole further left, which makes the response faster.

The steady-state gain is

\[T(0) = \frac{K}{1+K}\]

so proportional feedback alone does not remove step-tracking error completely. For a unit step reference, the steady-state output is \(K/(1+K)\) and the steady-state error is

\[1 - \frac{K}{1+K} = \frac{1}{1+K}\]

This is the standard tradeoff. Larger \(K\) reduces error and speeds the loop, but in a more realistic model it also increases sensitivity to unmodelled dynamics, actuator limits, and measurement noise.
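Because the closed loop here is a first-order lag, its step response has a closed form, and the tradeoff can be tabulated directly. A short sketch using the chapter's plant \(G(s) = 1/(5s+1)\):

```python
import numpy as np

def step_response(K, t):
    """Unit-step response of T(s) = K/(5s + 1 + K): a first-order lag
    with final value K/(1+K) and time constant 5/(1+K)."""
    return (K / (1 + K)) * (1 - np.exp(-(1 + K) * t / 5))

for K in (1.0, 4.0, 10.0):
    y_final = step_response(K, 30.0)          # essentially settled by t = 30
    ss_error = 1 - K / (1 + K)                # = 1/(1 + K)
    print(f"K={K:5.1f}  y(30)={y_final:.3f}  steady-state error={ss_error:.3f}")
```

Larger \(K\) pushes the pole left (faster settling) and the error down, exactly as derived. What the script cannot show is the noise amplification and actuator saturation a real loop would add at high gain.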


60.5 Worked example 2: state feedback for a two-state system

Consider the linear system

\[\dot{x} = Ax + Bu\]

with

\[A = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}\]

Choose state feedback

\[u = -Kx, \qquad K = \begin{pmatrix} 3 & 2 \end{pmatrix}\]

Then

\[A - BK = \begin{pmatrix} 0 & 1 \\ -2 & -1 \end{pmatrix} - \begin{pmatrix} 0 \\ 1 \end{pmatrix} \begin{pmatrix} 3 & 2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -5 & -3 \end{pmatrix}\]

Its characteristic polynomial comes from the 2×2 determinant:

\[\det(\lambda I - (A-BK)) = \det\begin{pmatrix} \lambda & -1 \\ 5 & \lambda+3 \end{pmatrix} = \lambda(\lambda+3) + 5 = \lambda^2 + 3\lambda + 5\]

The eigenvalues are

\[\lambda = \frac{-3 \pm \sqrt{9-20}}{2} = -\frac{3}{2} \pm \frac{\sqrt{11}}{2}i\]

Both eigenvalues have negative real part, so the closed-loop system is stable. Compared with the uncontrolled system, the controller has shifted the system’s modes. That is the state-space version of pole placement.
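The hand computation above is worth confirming numerically. A minimal check with numpy, using the chapter's matrices:

```python
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[3.0, 2.0]])

A_cl = A - B @ K
eigs = np.linalg.eigvals(A_cl)
print(A_cl)                         # [[ 0.  1.] [-5. -3.]]
print(eigs)                         # -1.5 ± 1.658i  (= -3/2 ± (√11/2)i)
print(bool(np.all(eigs.real < 0)))  # True: the closed loop is stable
```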


This is the language used constantly in robotics, flight control, and embedded control software. The controller may run in discrete time and the sensors may be noisy, but the core mathematical question is still: what did your feedback law do to the modes?

60.6 Worked example 3: stabilising a scientific instrument

An optics lab wants the intensity of a laser to stay near a target despite thermal drift. A simplified linear model of the plant is slow and first-order, so the experimentalist begins with a proportional controller for the same reason an engineer would: it is cheap to implement and easy to reason about.

If the gain is too low, the output drifts and the reference is not tracked closely enough. If the gain is pushed too high, noise and delay in the sensor chain start to matter. The mathematics is exactly the same as the motor-speed example. What changes is the vocabulary of the lab.

This is the point of the optional viewpoint. Control theory is not owned by electrical engineering. It is a general mathematical structure for shaping dynamics under measurement.

60.7 Where this goes

The most direct continuation is Estimation, inverse problems, and filtering. Real control systems rarely have direct access to every state they need. Once you start asking how to control a system, the next question is often how to estimate the quantities you cannot measure cleanly.

This chapter also sharpens the way you should read later Volume 8 material on sampled systems and computational models. A simulation is not yet a controller. A model can be numerically stable and still be a bad control design. Volume 8 keeps returning to that distinction.

Tip: Applications
  • motor-speed control in electric drives
  • altitude hold and attitude stabilisation in drones
  • furnace and reactor temperature control
  • active vibration suppression in flexible structures
  • instrument stabilisation in optics and experimental physics
  • feedback loops inside robotics and autonomous systems

60.8 Exercises

These are project-style exercises. Each one asks you to connect the algebra to behaviour, not just compute a number.

60.8.1 Exercise 1

A thermal plant is modelled by

\[G(s) = \frac{2}{10s + 1}\]

with proportional controller \(C(s) = K\) and unity feedback.

  1. Derive the closed-loop transfer function.
  2. Find the closed-loop pole.
  3. For a unit-step reference, compute the steady-state error.
  4. Compare what changes when \(K = 1\) and when \(K = 4\).

60.8.2 Exercise 2

A two-state model has

\[A = \begin{pmatrix} 0 & 1 \\ -1 & -0.5 \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \qquad K = \begin{pmatrix} 4 & 1.5 \end{pmatrix}\]

Compute \(A-BK\) and decide whether the closed-loop system is stable.

60.8.3 Exercise 3

A lab instrument is stable but too slow. The current proportional gain gives a closed-loop pole at \(s=-0.2\). The engineer proposes increasing the gain so the pole moves to \(s=-0.8\).

Write a short design note answering:

  1. What qualitative change will the lab see in the time response?
  2. Why might this still be a bad idea if the sensor is noisy or delayed?

60.8.4 Exercise 4

Choose one system from your own field: a drive, drone, room heater, queueing server with autoscaling, or experimental instrument. Identify:

  1. the reference
  2. the measured output
  3. the error signal
  4. the control input
  5. one reason high gain might help
  6. one reason high gain might hurt

Write the answer as a one-page systems sketch, not as prose only.
