State Space Control is a feedback control algorithm; it calculates the input to apply to a dynamic system in order to maintain the output of that system at a particular setpoint.
If you're familiar with PID control then at this point state space control must sound like basically the same thing, and to some extent that's true. Like PID, a state space controller measures the output of a system and uses that to calculate the appropriate control input to feed back into it.
Unlike PID control though (at least in most cases) state space control takes into account a model of the system that it's controlling. Defining a model for a given dynamic system can be a little tricky which makes using state space control slightly more complex than a PID controller.
Even still, what state space control lacks in simplicity, it more than makes up for with the fact that it can take into account apriori knowledge of the system's dynamics into its control algorithm. As a result, state space is generally better suited to controlling systems with complex dynamics as well as systems with multiple inputs and outputs. It has a number of other useful properties too, but we can get into those after discussing the fundamentals.
State space control assumes that a dynamic system maintains some internal set of variables called a "state" (represented by the vector $x$) which describes the current status of the system. The state, along with the current control input (represented by the vector $u$), determines how the output of the system (represented by the vector $y$) will evolve over time. These dynamics are described by the following set of differential equations:

$$\dot{x} = Ax + Bu$$
$$y = Cx + Du$$
These equations basically say that the rate of change of the state, as well as the system's output, are linear combinations of the current state and the control input. These linear combinations are defined by four matrices:

- $A \in \mathbb{R}^{X \times X}$ is the state transition matrix, which describes the way in which the state would evolve over time in the absence of any control inputs
- $B \in \mathbb{R}^{X \times U}$ is the input matrix, which describes the effect of a control input on the state
- $C \in \mathbb{R}^{Y \times X}$ is the observation matrix, which describes how the state is mapped onto the sensor measurements
- $D \in \mathbb{R}^{Y \times U}$ is the direct transmission matrix; if any of the inputs are immediately reflected in the output, this matrix describes how that mapping takes place (they usually aren't, and this is left as zeros)

(note that the notation $\mathbb{R}^{N \times M}$ means that the matrix has $N$ rows and $M$ columns, and $X$, $U$ and $Y$ are the number of states, inputs and outputs respectively)
It's possible to describe any linear time-invariant system by filling in these four matrices.
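As a concrete sketch, here's one hypothetical example: a mass-spring-damper whose state is position and velocity, whose input is an applied force, and whose output is the measured position. The parameter values here are arbitrary choices for illustration:

```python
import numpy as np

# Hypothetical mass-spring-damper: state x = [position, velocity],
# input u = [force], output y = [position]. Parameter values are
# arbitrary illustrative choices.
m, k, b = 1.0, 2.0, 0.5  # mass, spring constant, damping coefficient

A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])  # state transition matrix
B = np.array([[0.0],
              [1.0 / m]])         # input matrix
C = np.array([[1.0, 0.0]])        # observation matrix: we only measure position
D = np.array([[0.0]])             # no direct transmission

def dynamics(x, u):
    """Evaluate x_dot = A x + B u and y = C x + D u."""
    return A @ x + B @ u, C @ x + D @ u

x = np.array([[1.0], [0.0]])  # start displaced 1 m, at rest
u = np.array([[0.0]])         # no force applied
x_dot, y = dynamics(x, u)     # spring pulls the mass back: x_dot = [0, -2]
```

With no input applied, the spring term $-k/m$ dominates and the mass accelerates back toward the origin, which is exactly what the $A$ matrix alone predicts.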
Assuming it's possible to directly measure the entire state (i.e. $y = x$ and $C = I$), implementing a state space controller is really simple. The state is simply multiplied by a control gain matrix $K$:

$$u = -Kx$$

and the result is fed back into the plant (the system being controlled). As a block diagram this looks like so:
Like PID gains, the $K$ matrix needs to be carefully tuned for a given system. Depending on the complexity of the system, $K$ can be quite large, which makes this tuning very difficult. Fortunately state space control includes a formal method for tuning gains to arrive at what is called a Linear Quadratic Regulator (LQR). To calculate an LQR for your system head over to the Tune Those Gains! python notebook.
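The notebook covers the details, but at its core LQR design solves an algebraic Riccati equation that trades off state error against control effort. A minimal sketch with scipy, reusing the hypothetical mass-spring-damper matrices from earlier (the Q and R weights are arbitrary illustrative choices, not recommendations):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical mass-spring-damper from earlier.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])

# Q penalises state error, R penalises control effort; these
# weights are arbitrary illustrative choices.
Q = np.diag([10.0, 1.0])
R = np.array([[1.0]])

# Solve the continuous-time algebraic Riccati equation, then
# recover the optimal gain K = R^-1 B^T P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# The closed loop x_dot = (A - B K) x should be stable, i.e. all
# eigenvalues should have negative real parts.
eigs = np.linalg.eigvals(A - B @ K)
```

Increasing the entries of Q makes the controller drive state errors to zero more aggressively, at the cost of larger control inputs; increasing R does the opposite.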
The control gain in the block diagram above is designed to drive the state to zero. To instead have the system follow a setpoint, the controller can be modified like so:
The $\bar{N}$ matrix maps a reference input $r$ into an offset to the control input $u$. That offset is just enough to hold the system in place once the system output reaches the setpoint, as well as to cancel out the effect of the $K$ matrix:

$$u = -Kx + \bar{N}r$$

The $\bar{N}$ matrix can be calculated entirely from the system matrices ($A$, $B$, $C$ and $D$) so there's no need for tuning.
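One common way to compute this feedforward term (a sketch, not necessarily the exact procedure this library uses) is to solve for the steady-state state and input that produce $y = r$, then fold in the $K$ gain. The gain value for $K$ here is an assumed stabilising gain, purely for illustration:

```python
import numpy as np

# Hypothetical mass-spring-damper from earlier, plus an assumed
# stabilising gain K (illustrative values only).
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
K = np.array([[2.0, 1.5]])

# Solve [[A, B], [C, D]] @ [[Nx], [Nu]] = [[0], [I]] so that the
# steady state x = Nx r, u = Nu r yields x_dot = 0 and y = r.
n, m = A.shape[0], B.shape[1]
M = np.block([[A, B], [C, D]])
rhs = np.vstack([np.zeros((n, m)), np.eye(m)])
sol = np.linalg.solve(M, rhs)
Nx, Nu = sol[:n], sol[n:]

# Combined feedforward gain: u = -K x + Nbar r.
Nbar = Nu + K @ Nx

# Sanity check: at steady state the output equals the reference and
# the state derivative is zero.
r = np.array([[1.0]])
x_ss = Nx @ r
u_ss = -K @ x_ss + Nbar @ r
```

The $K \cdot N_x$ term inside $\bar{N}$ is what cancels the effect of the control gain at the setpoint, so the feedback only acts on deviations from it.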
It's rarely the case that the entire state can be directly observed by the controller. If we're only able to partially observe the state, then an estimator can be used to infer the remaining state variables so that the control gain can still do its job. Accordingly, the estimator lives between the output of the system and the control law like so:
An estimator works by maintaining an internal guess of the current state, referred to as $\hat{x}$. It updates that estimate firstly using the system model like so:

$$\dot{\hat{x}} = A\hat{x} + Bu$$

Then it calculates the error between the expected system output $C\hat{x}$ and the measured output $y$, and projects that onto the state estimate like so:

$$\dot{\hat{x}} = A\hat{x} + Bu + L(y - C\hat{x})$$
Where $L$ is an estimator gain matrix which maps the output prediction error onto the state estimate. Like the $K$ matrix, the $L$ matrix needs to be tuned appropriately for a given system. Fortunately this can also be done without too much guesswork. The process for doing so is described in the Tune Those Gains! python notebook included with this library.
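One standard way to choose $L$ (a sketch of the general technique, not necessarily this library's exact procedure) is pole placement, exploiting the duality between control and estimation: placing the eigenvalues of $A - LC$ is the same problem as placing those of $A^T - C^T L^T$. The pole locations below are arbitrary choices, picked to be faster than the plant's own dynamics:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical mass-spring-damper from earlier; only position is measured.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Place the estimator poles via duality; the locations are arbitrary
# illustrative choices, faster than the plant's open-loop dynamics.
L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

def estimator_step(x_hat, u, y, dt):
    """One Euler step of x_hat_dot = A x_hat + B u + L (y - C x_hat)."""
    x_hat_dot = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    return x_hat + dt * x_hat_dot
```

The eigenvalues of $A - LC$ set how quickly the estimate converges onto the true state; placing them well to the left of the controller's closed-loop poles keeps the estimator from limiting the controller's performance.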
So, how does the I-component from PID control figure into all of this? Strictly speaking, in state space control, if the model is parameterised perfectly (and the actual system is perfectly linear) then there should be no need for integral control. However this is almost never the case, and as a result a small amount of error can creep into the system. That error is assumed to manifest in the system model as a disturbance $w$ like so:

$$\dot{x} = Ax + Bu + w$$
To cancel out that component, integral control is used to calculate an offset to the control input in accordance with:

$$u_i = K_i \int (r - y)\,dt$$

Where $K_i$ is the integral gain matrix. The integral component $u_i$ is then added to the control input alongside the contributions from the reference input and the control law like so:

$$u = -Kx + \bar{N}r + u_i$$
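To see why the integral term matters, here's a minimal Euler-integration sketch of the full control law rejecting a constant unmodelled disturbance. The gains $K$, $\bar{N}$ and $K_i$ below are assumed illustrative values for the hypothetical mass-spring-damper used earlier, not tuned recommendations:

```python
import numpy as np

# Hypothetical mass-spring-damper from earlier, with assumed gains.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[2.0, 1.5]])    # assumed state feedback gain
Nbar = np.array([[4.0]])      # assumed feedforward gain for this system
Ki = np.array([[3.0]])        # assumed integral gain

r = np.array([[1.0]])         # setpoint
w = np.array([[0.0], [0.4]])  # constant unmodelled disturbance
dt = 0.001
x = np.zeros((2, 1))
integral = np.zeros((1, 1))

for _ in range(20000):  # simulate 20 seconds of closed-loop behaviour
    y = C @ x
    integral += dt * (r - y)                 # accumulate setpoint error
    u = -K @ x + Nbar @ r + Ki @ integral    # full control law
    x = x + dt * (A @ x + B @ u + w)         # plant with disturbance
```

Without the integral term, this system would settle with a constant offset from the setpoint (the disturbance pushes the steady-state output to about 1.1 here); with it, the accumulated error winds up just enough extra control input to cancel $w$ and the output converges to the reference.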
This has been a pretty casual introduction to quite a complex topic, so if you're still not feeling comfortable with the concepts involved in state space control, have a look at the following resources:
- Feedback Control of Dynamic Systems - the textbook I used to learn about State Space Control
- Feedback Systems: An Introduction for Scientists and Engineers - a textbook written by the authors of the python control library used in this notebook!
- Control Tutorials - an online resource with worked examples of state space modeling and analysis (also used in this notebook)




