Control Theory

2.5 State-space representation

State-space representation is a powerful tool in control theory, allowing engineers to model and analyze complex systems. It uses state variables to describe system dynamics, enabling a unified approach for multi-input, multi-output systems and facilitating advanced control techniques.

This method offers advantages over transfer function models, providing insights into internal system behavior. State-space equations, matrices, and concepts like controllability and observability form the foundation for designing effective control systems and state estimators.

State-space models

  • State-space models provide a mathematical framework for analyzing and designing control systems in the time domain
  • They represent the system dynamics using a set of first-order differential equations or difference equations
  • State-space models allow for a compact and systematic representation of multi-input, multi-output (MIMO) systems

Difference between state-space and transfer function models

  • State-space models describe the system dynamics using state variables, while transfer function models use input-output relationships
  • State-space models can handle MIMO systems directly, whereas transfer function models typically deal with single-input, single-output (SISO) systems
  • State-space models provide insights into the internal behavior of the system, while transfer function models focus on the external input-output behavior

Advantages of state-space representation

  • Allows for a unified treatment of MIMO systems
  • Provides insights into the internal structure and behavior of the system
  • Facilitates the design of state feedback controllers and observers
  • Enables the analysis of system properties such as controllability and observability
  • Supports the application of modern control techniques (optimal control, robust control)

State variables

  • State variables are a set of variables that completely describe the dynamic behavior of a system at any given time
  • They capture the memory effect of the system, i.e., the influence of past inputs on the current system state

Definition of state variables

  • State variables are the minimum set of variables required to fully characterize the system's behavior
  • They represent the internal states of the system that evolve over time based on the system dynamics and inputs

Selection of state variables

  • The choice of state variables is not unique and depends on the system and the desired representation
  • Common choices include physical variables (position, velocity, current, voltage) or mathematical variables (error, integral of error)
  • The selection should lead to a minimal and controllable/observable representation of the system

State-space equations

  • State-space equations describe the evolution of the state variables and the relationship between the state variables, inputs, and outputs
  • They consist of two sets of equations: the state equation and the output equation

General form of state-space equations

  • The state equation describes the dynamics of the state variables:

$$\dot{x}(t) = Ax(t) + Bu(t)$$

where $x(t)$ is the state vector, $u(t)$ is the input vector, $A$ is the state matrix, and $B$ is the input matrix

  • The output equation relates the state variables to the system outputs:

$$y(t) = Cx(t) + Du(t)$$

where $y(t)$ is the output vector, $C$ is the output matrix, and $D$ is the feedthrough matrix
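
As a concrete illustration of both equations, the sketch below builds the $A$, $B$, $C$, $D$ matrices for a mass-spring-damper system and evaluates one instant of the state and output equations with NumPy (the parameter values and the choice of measured output are purely illustrative):

```python
import numpy as np

# Mass-spring-damper: m*x'' + c*x' + k*x = u
# States: x1 = position, x2 = velocity (parameter values are illustrative)
m, c, k = 1.0, 0.5, 2.0

A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])   # state matrix
B = np.array([[0.0],
              [1.0 / m]])          # input matrix
C = np.array([[1.0, 0.0]])         # output matrix: measure position only
D = np.array([[0.0]])              # no direct feedthrough

# Evaluate one instant of the state and output equations
x = np.array([[0.1], [0.0]])       # current state
u = np.array([[1.0]])              # current input
xdot = A @ x + B @ u               # state equation
y = C @ x + D @ u                  # output equation
print(xdot.ravel(), y.ravel())
```

With position as the only measured output, $C$ picks out the first state and $D = 0$ reflects the absence of direct feedthrough.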

State equation vs output equation

  • The state equation governs the evolution of the state variables over time based on the current state and input
  • The output equation determines the system outputs as a function of the current state and input
  • The state equation captures the internal dynamics, while the output equation describes the external behavior

Linearization of nonlinear systems

  • State-space representation is particularly suited for linear systems, but it can also be applied to nonlinear systems through linearization
  • Linearization involves approximating a nonlinear system around an operating point using a first-order Taylor series expansion
  • The resulting linearized state-space model is valid in the vicinity of the operating point and facilitates the application of linear control techniques
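
As a sketch of the linearization step, the snippet below forms the $A$ and $B$ matrices of a pendulum by taking central-difference Jacobians of the nonlinear dynamics about the downward equilibrium (the pendulum model and parameter values are illustrative):

```python
import numpy as np

# Nonlinear pendulum: x1 = angle, x2 = angular velocity (parameters illustrative)
g, l = 9.81, 1.0

def f(x, u):
    return np.array([x[1], -(g / l) * np.sin(x[0]) + u])

def jacobian(fun, z0, eps=1e-6):
    """Central-difference Jacobian of fun at z0."""
    f0 = fun(z0)
    J = np.zeros((f0.size, z0.size))
    for i in range(z0.size):
        dz = np.zeros_like(z0)
        dz[i] = eps
        J[:, i] = (fun(z0 + dz) - fun(z0 - dz)) / (2 * eps)
    return J

# Linearize about the operating point x = 0, u = 0
x0, u0 = np.zeros(2), np.zeros(1)
A = jacobian(lambda x: f(x, u0[0]), x0)  # ~ [[0, 1], [-g/l, 0]]
B = jacobian(lambda u: f(x0, u[0]), u0)  # ~ [[0], [1]]
print(A, B)
```

The numerical Jacobians match the analytic first-order Taylor expansion of $\sin\theta \approx \theta$ near the operating point.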

State-space matrices

  • The state-space matrices ($A$, $B$, $C$, $D$) characterize the system dynamics and input-output relationships
  • They depend on the chosen state variables and the system parameters

State matrix A

  • The state matrix $A$ represents the dynamics of the state variables in the absence of external inputs
  • It captures the coupling between the state variables and their rates of change
  • The eigenvalues of $A$ determine the stability and dynamic behavior of the system

Input matrix B

  • The input matrix $B$ describes how the external inputs affect the rates of change of the state variables
  • It maps the inputs to the corresponding state equations
  • The columns of $B$ represent the influence of each input on the state variables

Output matrix C

  • The output matrix $C$ relates the state variables to the system outputs
  • It determines which combinations of the state variables are measured or observed
  • The rows of $C$ represent the contribution of each state variable to the outputs

Feedthrough matrix D

  • The feedthrough matrix $D$ represents the direct influence of the inputs on the outputs, bypassing the state variables
  • It is often zero in many practical systems, indicating no direct feedthrough from inputs to outputs
  • A non-zero $D$ matrix implies an algebraic relationship between inputs and outputs

Controllability

  • Controllability is a fundamental property of a control system that determines whether the system states can be steered to any desired state in finite time by applying appropriate control inputs

Definition of controllability

  • A system is said to be controllable if, for any initial state $x(t_0)$ and any desired final state $x(t_f)$, there exists a control input $u(t)$ that can transfer the system from $x(t_0)$ to $x(t_f)$ in finite time

Controllability matrix

  • The controllability matrix $\mathcal{C}$ characterizes the controllability of a system (the script notation distinguishes it from the output matrix $C$)
  • For a linear time-invariant system with state matrix $A$ and input matrix $B$, the controllability matrix is defined as:

$$\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$$

where $n$ is the number of state variables

Controllability tests

  • A system is controllable if and only if the controllability matrix has full row rank, i.e., $\operatorname{rank}(\mathcal{C}) = n$
  • Alternative tests include the Popov-Belevitch-Hautus (PBH) test, which checks that $[A - \lambda I \quad B]$ has full row rank $n$ for each eigenvalue $\lambda$ of $A$
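
The rank test above can be sketched in a few lines of NumPy for an illustrative second-order system:

```python
import numpy as np

# Illustrative system matrices
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^{n-1} B] and its rank
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rank = np.linalg.matrix_rank(ctrb)
print(ctrb, rank)  # rank == n means the system is controllable
```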

Controllable canonical form

  • The controllable canonical form is a special state-space representation in which the state matrix $A$ is in companion form, carrying the coefficients of the characteristic polynomial, and the input matrix is $B = [0 \; \cdots \; 0 \; 1]^T$
  • A system admits this form exactly when it is controllable; separating a system into controllable and uncontrollable subsystems is instead the role of the Kalman decomposition
  • The controllable canonical form facilitates the design of state feedback controllers, since pole-placement gains can be read off the companion structure directly

Observability

  • Observability is a dual concept to controllability and determines whether the system states can be reconstructed or estimated from the measured outputs

Definition of observability

  • A system is said to be observable if, for any initial state $x(t_0)$, the state $x(t)$ can be uniquely determined from the knowledge of the input $u(t)$ and output $y(t)$ over a finite time interval

Observability matrix

  • The observability matrix characterizes the observability of a system
  • For a linear time-invariant system with state matrix $A$ and output matrix $C$, the observability matrix is defined as:

$$O = \begin{bmatrix} C \\ CA \\ CA^2 \\ \vdots \\ CA^{n-1} \end{bmatrix}$$

where $n$ is the number of state variables

Observability tests

  • A system is observable if and only if the observability matrix has full column rank, i.e., rank($O$) = $n$
  • Alternative tests include the PBH test, which checks that $\begin{bmatrix} A - \lambda I \\ C \end{bmatrix}$ has full column rank $n$ for each eigenvalue $\lambda$ of $A$
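
A minimal NumPy sketch of the observability rank test, using an illustrative second-order system with a single measured output:

```python
import numpy as np

# Illustrative system matrices
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])  # only the first state is measured
n = A.shape[0]

# Observability matrix: C, CA, ..., CA^{n-1} stacked row-wise
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
rank = np.linalg.matrix_rank(obsv)
print(obsv, rank)  # rank == n means the system is observable
```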

Observable canonical form

  • The observable canonical form is the dual of the controllable canonical form: the state matrix $A$ is in (transposed) companion form and the output matrix $C$ has a single nonzero entry
  • A system admits this form exactly when it is observable; separating observable from unobservable subsystems is instead the role of the Kalman decomposition
  • The observable canonical form facilitates the design of state observers

Transformations of state-space models

  • State-space models can be transformed into equivalent representations while preserving the input-output behavior
  • Transformations are useful for simplifying the analysis, design, and implementation of control systems

Similarity transformations

  • Similarity transformations involve a change of basis in the state space using a nonsingular matrix $T$
  • The transformed state vector is given by $\tilde{x}(t) = Tx(t)$, and the transformed state-space matrices are:

$$\tilde{A} = TAT^{-1}, \quad \tilde{B} = TB, \quad \tilde{C} = CT^{-1}, \quad \tilde{D} = D$$

  • Similarity transformations preserve the eigenvalues, controllability, and observability properties of the system
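
The invariance of the eigenvalues can be checked numerically; the sketch below applies an arbitrary nonsingular $T$ to an illustrative system and compares the spectra of $A$ and $\tilde{A}$:

```python
import numpy as np

# Illustrative system and an arbitrary nonsingular transformation T
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
Tinv = np.linalg.inv(T)

# Transformed state-space matrices
At, Bt, Ct = T @ A @ Tinv, T @ B, C @ Tinv

# Eigenvalues (and hence stability) are invariant under the transformation
print(np.sort(np.linalg.eigvals(A).real))
print(np.sort(np.linalg.eigvals(At).real))
```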

Coordinate transformations

  • Coordinate transformations are similarity transformations chosen to simplify the state-space representation
  • Common choices include scaling, permutation, and rotation of the state variables; diagonalization using the matrix of eigenvectors is a standard example
  • Coordinate transformations can be used to decouple the system dynamics or to align the state variables with physical quantities

Canonical forms

  • Canonical forms are standardized state-space representations that exhibit certain desirable properties
  • Examples of canonical forms include the controllable canonical form, observable canonical form, and Jordan canonical form
  • Canonical forms facilitate the analysis and design of control systems by providing a structured representation of the system dynamics

Solution of state-space equations

  • Solving state-space equations involves determining the time evolution of the state variables given the initial conditions and input signals
  • Several methods can be used to solve state-space equations, depending on the system properties and desired outcomes

State transition matrix

  • The state transition matrix, denoted as $\Phi(t, t_0)$, relates the state vector at time $t$ to the initial state vector at time $t_0$
  • It is defined as the solution to the homogeneous state equation: $\dot{\Phi}(t, t_0) = A\Phi(t, t_0)$ with $\Phi(t_0, t_0) = I$
  • The state transition matrix allows for the computation of the state vector at any time instant given the initial state

Matrix exponential

  • The matrix exponential is a mathematical operation that extends the concept of the scalar exponential to matrices
  • For a square matrix $A$, the matrix exponential is defined as:

$$e^{At} = I + At + \frac{(At)^2}{2!} + \frac{(At)^3}{3!} + \cdots$$

  • The matrix exponential is used to compute the state transition matrix: $\Phi(t, t_0) = e^{A(t-t_0)}$
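
In practice the matrix exponential is rarely summed from the series; the sketch below uses SciPy's `expm` to form the state transition matrix and propagate an initial state for an illustrative system:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable system (eigenvalues -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])

# Zero-input response: x(t) = Phi(t, 0) x0 = e^{A t} x0
t = 0.5
Phi = expm(A * t)   # state transition matrix over the interval [0, t]
x_t = Phi @ x0
print(x_t)
```

At $t = t_0$ the state transition matrix reduces to the identity, consistent with $\Phi(t_0, t_0) = I$.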

Laplace transform approach

  • The Laplace transform is a powerful tool for solving linear differential equations, including state-space equations
  • By taking the Laplace transform of the state-space equations, the time-domain equations are converted into algebraic equations in the complex frequency domain
  • The solution in the time domain can be obtained by applying the inverse Laplace transform to the resulting expressions

Numerical methods for simulation

  • Numerical methods are used to approximate the solution of state-space equations when analytical solutions are not available or practical
  • Common numerical methods include the Euler method and Runge-Kutta methods, such as the Dormand-Prince scheme used by many adaptive ODE solvers
  • These methods discretize the continuous-time equations and iteratively compute the state vector at discrete time steps
  • Numerical simulations provide insights into the system behavior and facilitate the validation of control designs
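
The sketch below simulates the step response of an illustrative system two ways: with SciPy's adaptive Runge-Kutta solver `solve_ivp` and with a hand-rolled forward-Euler discretization:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative stable system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

def dynamics(t, x, u=1.0):
    """State equation with a unit-step input."""
    return A @ x + B.ravel() * u

# Adaptive Runge-Kutta reference solution
sol = solve_ivp(dynamics, (0.0, 10.0), [0.0, 0.0], rtol=1e-8, atol=1e-10)

# Forward-Euler discretization: x[k+1] = x[k] + h * f(t_k, x[k])
h, x = 0.001, np.zeros(2)
for k in range(10000):
    x = x + h * dynamics(k * h, x)

print(sol.y[:, -1], x)  # both approach the same steady state
```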

Steady-state response

  • The steady-state response refers to the long-term behavior of a system when the transient effects have died out
  • It is characterized by the equilibrium points and their stability properties

Equilibrium points

  • Equilibrium points are the states at which the system remains in a steady condition if no external disturbances or inputs are applied
  • They are determined by setting the state equation to zero: $0 = Ax_e + Bu_e$, where $x_e$ and $u_e$ are the equilibrium state and input, respectively
  • The equilibrium points provide information about the long-term behavior and operating conditions of the system
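
For a linear system with nonsingular $A$, the equilibrium condition can be solved directly; a minimal sketch with illustrative matrices:

```python
import numpy as np

# Illustrative system held at a constant input u_e
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
u_e = np.array([[1.0]])

# Equilibrium: 0 = A x_e + B u_e  =>  x_e = -A^{-1} B u_e (A nonsingular)
x_e = -np.linalg.solve(A, B @ u_e)
print(x_e.ravel())
```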

Stability of equilibrium points

  • Stability analysis determines whether the system will converge to or diverge from an equilibrium point when subjected to small perturbations
  • An equilibrium point is stable if the system returns to the equilibrium state after a small disturbance
  • Stability can be assessed using techniques such as eigenvalue analysis (for linear systems) or Lyapunov stability theory (for nonlinear systems)

Steady-state error

  • Steady-state error refers to the difference between the desired output and the actual output of a system in the steady state
  • It is a measure of the system's ability to track or regulate a desired reference signal
  • Steady-state error can be analyzed using the final value theorem in the Laplace domain or by examining the steady-state gains of the system
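
For a stable linear system, the steady-state (DC) gain follows from the state-space matrices as $G(0) = D - CA^{-1}B$, and the steady-state error to a unit step is one minus that gain; a minimal sketch with illustrative matrices:

```python
import numpy as np

# Illustrative stable system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# DC gain G(0) = D - C A^{-1} B; unit-step steady-state error = 1 - G(0)
dc_gain = (D - C @ np.linalg.solve(A, B)).item()
ess = 1.0 - dc_gain
print(dc_gain, ess)
```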

Relationship between state-space and transfer function models

  • State-space models and transfer function models are two different representations of the same system dynamics
  • They provide complementary perspectives and have their own advantages and limitations

Obtaining transfer functions from state-space models

  • Transfer functions can be derived from state-space models by taking the Laplace transform of the state-space equations
  • The resulting transfer function matrix $G(s)$ relates the Laplace transforms of the outputs to the Laplace transforms of the inputs:

$$G(s) = C(sI - A)^{-1}B + D$$

  • The transfer function matrix captures the input-output behavior of the system in the frequency domain
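
SciPy's `ss2tf` performs this conversion numerically; the sketch below recovers the transfer function coefficients of an illustrative second-order system:

```python
import numpy as np
from scipy.signal import ss2tf

# Illustrative system in companion form
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Numerator and denominator polynomial coefficients of G(s)
num, den = ss2tf(A, B, C, D)
print(num, den)  # G(s) = 1 / (s^2 + 3 s + 2)
```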

Minimal realization of transfer functions

  • Minimal realization refers to the process of finding a state-space model with the minimum number of state variables that reproduces a given transfer function
  • A minimal realization is both controllable and observable
  • Techniques such as the Gilbert realization or the Kalman decomposition can be used to obtain a minimal realization from a transfer function

Applications of state-space representation

  • State-space representation finds extensive applications in various areas of control system analysis and design
  • It provides a framework for advanced control techniques and system optimization

Control system design

  • State-space models facilitate the design of state feedback controllers, where the control input is determined based on the measured or estimated state variables
  • State feedback can be used to achieve desired closed-loop system performance, such as pole placement or optimal control
  • State-space representation allows for the design of multi-variable controllers that can handle MIMO systems effectively
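
As a sketch of pole placement, the snippet below uses SciPy's `place_poles` to compute a state feedback gain $K$ that moves the poles of an illustrative system to chosen locations:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative open-loop system (poles at -1 and -2)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# State feedback u = -K x placing the closed-loop poles at -4 and -5
result = place_poles(A, B, [-4.0, -5.0])
K = result.gain_matrix
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print(K, np.sort(closed_loop_poles.real))
```

For a single-input system the gain achieving a given pole set is unique, so the result can be checked against a hand calculation with the companion form.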

Observer design

  • Observers, also known as state estimators, are used to estimate the state variables when they are not directly measurable
  • State-space models enable the design of observers, such as the Luenberger observer or the Kalman filter
  • Observers combine the measured outputs with the system model to provide an estimate of the complete state vector
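
Observer design is the dual of pole placement; the sketch below computes a Luenberger observer gain $L$ for an illustrative system by placing the eigenvalues of $(A - LC)$ through `place_poles` applied to $(A^T, C^T)$:

```python
import numpy as np
from scipy.signal import place_poles

# Illustrative system with a single measured output
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Duality: eigenvalues of (A - L C) equal those of (A^T - C^T L^T)
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T
observer_poles = np.linalg.eigvals(A - L @ C)
print(L, np.sort(observer_poles.real))
```

The observer poles are typically placed faster (further left) than the controller poles so the state estimate converges before it is used for feedback.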

Kalman filtering

  • Kalman filtering is a recursive algorithm for estimating the state of a dynamic system in the presence of noise and uncertainties
  • It is based on the state-space representation and provides an optimal estimate of the state variables by minimizing the mean square error
  • Kalman filtering is widely used in applications such as navigation, tracking, and sensor fusion

Optimal control

  • Optimal control aims to determine the control inputs that minimize a specified performance criterion or cost function
  • State-space models are well-suited for formulating and solving optimal control problems, such as the linear quadratic regulator (LQR) or the linear quadratic Gaussian (LQG) control
  • Optimal control techniques based on state-space representation can handle constraints, uncertainties, and multiple objectives in a systematic manner
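
As a sketch of LQR design, the snippet below solves the continuous-time algebraic Riccati equation with SciPy and forms the optimal gain $K = R^{-1}B^TP$ for an illustrative system and weighting matrices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative system and LQR weights
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state weighting (illustrative)
R = np.array([[1.0]])  # input weighting (illustrative)

# Solve A^T P + P A - P B R^{-1} B^T P + Q = 0, then K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# For Q >= 0, R > 0 and a stabilizable pair (A, B), A - B K is stable
closed_loop_eigs = np.linalg.eigvals(A - B @ K)
print(K, closed_loop_eigs.real)
```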