Linear quadratic programming (LQ problem)
Linear quadratic programming (the LQ problem, also known as the linear quadratic regulator problem) is a widely used method in control theory and optimisation, and is particularly important in the field of optimal control.
The LQ problem aims to solve the following optimisation problem.
\[
\min_{u(t)} \int_0^T \left( x(t)^T Q x(t) + u(t)^T R u(t) \right) dt
\]
Where,
– \( x(t) \) is the state vector (e.g. a variable describing the system behaviour)
– \( u(t) \) is the control input vector (inputs for operating the system)
– \( Q \) is the state weight matrix
– \( R \) is the weight matrix of the control inputs
– \( T \) is the end time of the optimisation (usually the end time of the simulation of the system)
Furthermore, the dynamics of the system is assumed to be linear.
\[
\dot{x}(t) = A x(t) + B u(t)
\]
Where,
– \( A \) is the state transition matrix
– \( B \) is the control matrix
– \( u(t) \) is the control input vector
The objective of the LQ problem is to optimise the performance of the system by choosing the states and control inputs appropriately. The objective function takes into account the penalties (weightings) of the state and control inputs and seeks the optimal trade-off between state and control according to the dynamics of the system.
Given this dynamics model and cost function, the LQ problem consists of computing the optimal control input \( u(t) \): it characterises the relationship between the states and the control inputs and yields the appropriate control law (usually the optimal control).
The Riccati equation plays an important role in the solution of the LQ problem. It yields the state feedback control law (the optimal control law), in which the optimal control input \( u(t) \) is given by
\[
u(t) = -K x(t)
\]
Where,
– \( K \) is the gain matrix, computed from the solution \( P \) of the Riccati equation as \( K = R^{-1} B^T P \).
– The Riccati equation has the form
\[
A^T P + P A - P B R^{-1} B^T P + Q = 0
\]
Where,
– \( P \) is the solution of the Riccati equation (a symmetric positive semidefinite matrix, positive definite under the standard stabilisability and detectability assumptions)
– \( Q \) is the state weight matrix
– \( R \) is the weight matrix of the control inputs
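As a quick sanity check, the scalar case can be worked by hand: for \( \dot{x} = a x + b u \) with \( a = b = q = r = 1 \), the Riccati equation reduces to \( 2p - p^2 + 1 = 0 \), whose positive root is \( p = 1 + \sqrt{2} \). A minimal sketch verifying this numerically with SciPy (a standard library routine; the scalar example itself is an illustration, not part of the original text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Scalar system x_dot = a x + b u with a = b = 1, weights q = r = 1
A = np.array([[1.0]])
B = np.array([[1.0]])
Q = np.array([[1.0]])
R = np.array([[1.0]])

# Continuous-time algebraic Riccati equation:
#   A^T P + P A - P B R^{-1} B^T P + Q = 0
# reduces here to 2p - p^2 + 1 = 0, so p = 1 + sqrt(2)
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # K = R^{-1} B^T P

print(P[0, 0])  # ≈ 2.41421356, i.e. 1 + sqrt(2)
# closed loop: a - b*k = 1 - (1 + sqrt(2)) = -sqrt(2) < 0, i.e. stable
print((A - B @ K)[0, 0])
```

Because \( b = r = 1 \), the gain equals \( p \) itself; the negative closed-loop pole confirms that the feedback \( u = -Kx \) stabilises the otherwise unstable system.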
In optimal control problems, the optimal control inputs are usually found in a state-dependent form, and the solution to the LQ problem, known as state feedback control, computes the control inputs based on the system state. This allows the optimum control to be determined in real time.
The characteristics of the LQ problem include the following:
- Linearity: because the system dynamics are linear, well-established methods exist for solving the optimal control problem.
- Quadratic cost function: as the objective function is a quadratic function of the state and control inputs, the optimisation problem becomes a convex optimisation problem and the solution is unique and stable.
- Riccati equation: solving the Riccati equation is the central method for obtaining the optimal solution.
Advantages and limitations of linear quadratic programming include:
- Advantages:
- The solution is efficient and the computational cost is relatively low.
- It can be controlled using state feedback and is easy to implement.
- Uniqueness of the solution is guaranteed.
- Disadvantages:
- Not applicable when the system is not linear or when the objective function is not quadratic.
- Dynamic adjustment is difficult because the weights of the states and control inputs are fixed.
Linear-quadratic programming (the LQ problem) is a powerful tool for efficiently solving optimal control problems, with the Riccati equation providing the route to the optimal control law. It has been applied to a wide variety of systems and yields theoretically sound, practical solutions.
Implementation example
An example implementation of the linear-quadratic (LQ) problem in Python is described below. This section shows how to use the SciPy library to solve the Riccati equation and obtain the optimal control law.
Implementation overview:
- A linear system is defined using the state transition matrix \( A \) and the control matrix \( B \).
- Specify the state cost matrix \( Q \) and the control cost matrix \( R \).
- Solve the Riccati equation to obtain the optimal gain matrix \(K\).
- Calculate the optimal control \(u(t)=-Kx(t)\).
Required libraries:
pip install numpy scipy
Python code implementation
import numpy as np
import scipy.linalg
# System definition.
A = np.array([[1, 1], [0, 1]]) # state-transition matrix
B = np.array([[0], [1]]) # control matrix
Q = np.eye(2) # state-cost matrix
R = np.array([[1]]) # control cost matrix
# Solving the Riccati equation
P = scipy.linalg.solve_continuous_are(A, B, Q, R)
# Calculation of the optimal gain matrix K
K = np.linalg.inv(R).dot(B.T).dot(P)
# Display of results
print("optimal gain matrix K:")
print(K)
# initial state
x0 = np.array([[10], [0]]) # Initial state [position, speed].
# Simulation parameters
T = 10 # Simulation time
dt = 0.1 # time step
timesteps = int(T / dt)
# Calculation of system status and control inputs at every time step.
x = x0
states = []
controls = []
for t in range(timesteps):
    u = -K.dot(x)  # optimal control input
    states.append(x.flatten())
    controls.append(u.flatten())
    # Forward-Euler integration of the continuous dynamics x_dot = A x + B u
    x = x + dt * (A.dot(x) + B.dot(u))
# Plotting the results
import matplotlib.pyplot as plt
states = np.array(states)
controls = np.array(controls)
plt.figure(figsize=(10, 6))
plt.subplot(2, 1, 1)
plt.plot(np.linspace(0, T, timesteps), states[:, 0], label="position x1")
plt.plot(np.linspace(0, T, timesteps), states[:, 1], label="velocity x2")
plt.xlabel("Time (s)")
plt.ylabel("State")
plt.legend()
plt.subplot(2, 1, 2)
plt.plot(np.linspace(0, T, timesteps), controls)
plt.xlabel("Time (s)")
plt.ylabel("control input u")
plt.show()
Code description:
- Definition of the system: state transition matrix \( A \), control matrix \( B \), state cost matrix \( Q \) and control cost matrix \( R \). A simple two-dimensional system with one-dimensional position and velocity is assumed here.
- Solving the Riccati equation: use scipy.linalg.solve_continuous_are to solve the Riccati equation. This function can be used to solve the Riccati equation in continuous time.
- Calculate the optimal gain matrix: the optimal control law is \(u(t)=-Kx(t)\) and the gain matrix \(K\) is obtained. This is calculated from the solution of the Riccati equation.
- Simulation: the system state is updated at each time step from the initial state, the optimal control inputs are calculated and the state and control inputs are recorded.
- Plotting the results: finally, the states (position and velocity) and the control input \( u(t) \) are plotted against time.
Interpreting the results of the run
- State plot: if optimal control is used, the position and velocity converge with time.
- Control input plot: the control input \(u(t)\) varies over time to stabilise the system and eventually converges to zero in many cases.
CONCLUSION: This example implementation illustrates the basic concepts of linear-quadratic programming (LQ problem), where the optimal control is computed by solving the Riccati equation and simulating the system state and the behaviour of the control inputs. This method is used in a variety of fields, including robot control, automated vehicles and aircraft control.
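The example above uses the continuous-time Riccati equation. For a system whose update is genuinely discrete, \( x_{k+1} = A x_k + B u_k \), the discrete-time Riccati equation applies instead, and both it and the gain formula differ from the continuous-time case. A minimal sketch using SciPy's solve_discrete_are with the same matrices as above (the specific numbers are illustrative):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Discrete-time double integrator: x_{k+1} = A x_k + B u_k
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Discrete-time algebraic Riccati equation:
#   P = A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A + Q
P = solve_discrete_are(A, B, Q, R)
# Discrete-time optimal gain: K = (R + B^T P B)^{-1} B^T P A
K = np.linalg.inv(R + B.T @ P @ B) @ B.T @ P @ A

# Stability check: all closed-loop eigenvalues lie inside the unit circle
eigs = np.linalg.eigvals(A - B @ K)
print(np.abs(eigs))

# Simulating x_{k+1} = (A - B K) x_k drives the state to the origin
x = np.array([[10.0], [0.0]])
for _ in range(50):
    x = (A - B @ K) @ x
```

Note that the discrete-time gain is not \( R^{-1} B^T P \); mixing the continuous- and discrete-time formulas is a common source of bugs.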
Application examples
Linear quadratic programming (LQ problem) has been applied to many optimal control problems. Specific applications are described below.
1. Automated vehicle control: automated driving requires optimal control of the vehicle state (e.g. position, speed, attitude). The LQ problem is used to optimise the position and speed of a vehicle based on a linear model describing the vehicle dynamics.
- Objective: to control the speed and position of a vehicle with minimum energy while avoiding obstacles.
- Approach:
– The vehicle state \( x(t) \) is represented by position, speed, attitude, etc.
– The control inputs \( u(t) \) are acceleration and steering angle.
– By solving the LQ problem, the optimal control inputs are obtained and the vehicle is guided efficiently to the desired position.
- Case studies:
- Self-driving technologies such as Google Waymo and Tesla use the LQ method to optimise the vehicle’s movement and control it to drive safely and efficiently.
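As an illustrative sketch (not taken from any production system), the longitudinal part of the vehicle problem can be modelled as a double integrator, with position error and velocity as states and acceleration as the input; all matrices and numbers below are assumptions chosen for the example:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical longitudinal vehicle model (double integrator):
# state x = [position error (m), velocity (m/s)], input u = acceleration (m/s^2)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])  # weight position error more heavily than velocity
R = np.array([[1.0]])     # weight on control effort (energy / comfort)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P  # continuous-time gain K = R^{-1} B^T P

# Close a 5 m gap: forward-Euler integration of x_dot = A x + B u, u = -K x
dt, steps = 0.01, 2000  # 20 s of simulated time
x = np.array([[5.0], [0.0]])
for _ in range(steps):
    u = -K @ x
    x = x + dt * (A @ x + B @ u)

print(x.flatten())  # both position error and velocity approach zero
```

Raising the position weight in \( Q \) makes the gap close faster at the cost of larger accelerations; raising \( R \) does the opposite, which is exactly the state-versus-control trade-off described above.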
2. Robot arm control: the LQ problem is very effective in the control of industrial and research robot arms. Robot arms have complex dynamics due to their multiple joints, and LQ control can be applied to (linearised models of) such systems.
- Objective: to ensure that the robot arm moves quickly and precisely to a specific position.
- Approach:
– The position, velocity and acceleration of the robot arm are defined as a state vector.
– The required torques and joint-angle control inputs are calculated by solving the LQ problem.
- Case study:
- Industrial robots: robot arms used in car manufacturing and electronics assembly efficiently assemble parts and achieve precise movements through LQ control.
- Medical robots: Surgical robots also use LQ control to perform precise movements.
3. Aircraft flight control: the LQ problem is of great importance in aircraft flight control, where the aircraft’s attitude, speed and altitude must be optimally controlled.
- Objective: to stabilise the aircraft and ensure safe and efficient flight.
- Approach:
– Define the aircraft state \( x(t) \) in terms of position, speed and attitude angle.
– LQ control is used to determine the optimum rudder angle and engine power.
- Case study:
- Commercial aircraft: aircraft autopilot systems use LQ control to maintain in-flight stability and economical operation.
- Drones: the LQ method is also used in the control of small drones to ensure stable flight.
4. Power grid control: the power grid optimisation problem uses the LQ problem to control power plants and transmission networks. LQ control can minimise costs while maintaining the balance between electricity supply and demand.
- Objective: to distribute energy efficiently while maintaining a balance between supply and demand of electricity.
- Approach:
– The state of the electricity grid (e.g. output, voltage and frequency of each power plant) is defined as a state vector.
– LQ control is used to determine the optimum power plant outputs and the distribution of power.
- Case study:
- Power utility: the optimum distribution of power between power plants is determined using LQ control to improve energy efficiency and reduce costs.
5. Optimising economic models: the LQ problem is also applied in economics. In particular, in dynamic optimisation problems, LQ control is used to optimally adjust economic variables such as capital, labour and investment.
- Objective: to optimise the economic system and distribute resources efficiently.
- Approach:
– The state of the economic model is described by variables such as GDP, consumption and investment.
– LQ control is used to optimise growth while maintaining economic stability.
- Case studies:
- Simulation of economic policy: the LQ control is used to optimise the government’s economic policy, leading to optimal policy decisions.
The LQ problem is widely used for optimal control across many fields; real-world systems such as cars, robots, aircraft, power grids and economic models achieve efficient, stable control with LQ methods. Thanks to its simplicity and computational efficiency, LQ control remains one of the most important modern optimisation techniques.
Reference books
Reference books on linear-quadratic programming (LQ problems) are described.
1. ‘Optimal Control Theory: An Introduction’
– Author: Donald E. Kirk
– Abstract: A classic introduction to optimal control theory. It describes linear-quadratic control problems (LQ problems) and covers both the theoretical background and practical applications.
2. ‘Linear Systems and Signals’
– Author: B. P. Lathi
– Abstract: An excellent textbook on the theory of linear systems, ideal for learning the basic theory of linear control systems underlying the LQ problem. The key concepts of control engineering are explained in an easy-to-understand manner.
3. ‘Applied Optimal Control: Optimization, Estimation, and Control’
– Authors: A. E. Bryson, Y. C. Ho
– Abstract: This book covers applications of optimal control, including the LQ problem and its extensions. It provides an in-depth study of the techniques and theory for optimising real-world problems.
4. ‘Feedback Control of Dynamic Systems’
– Authors: Gene F. Franklin, J. David Powell, Abbas Emami-Naeini
– Abstract: A comprehensive reference on feedback control of dynamic systems in which LQ control and optimal control theory are explained in detail. Very useful for a deeper understanding of control theory.
5. ‘Optimal Control: Linear Quadratic Methods’
– Authors: Brian D. O. Anderson, John B. Moore
– Abstract: This book is dedicated to LQ control problems and provides detailed explanations from the theoretical background to concrete computational methods. Useful for those wishing to deepen their understanding of the optimisation of linear systems.
6. ‘Mathematical Methods for Engineers and Scientists’
– Author: K. B. K. L. Zieve
– Abstract: Introduces mathematical methods used in engineering and science, intended for those who want to learn the theory of optimal control, including the LQ problem, in a mathematically rigorous way.
7. ‘Convex Optimization’
– Authors: Stephen Boyd, Lieven Vandenberghe
– Abstract: This book treats optimisation theory in general, with linear-quadratic problems appearing as a special case; since LQ control can be formulated as a convex optimisation problem, it is useful background for the LQ problem.
8. ‘Linear Control System Analysis and Design’ (4th Edition)
– Authors: John J. D’Azzo, Constantine H. Houpis
– Abstract: A classic book on the design and analysis of linear control systems, covering a wide range of topics from the fundamentals of LQ control to its applications. It also contains content useful in practice.