The theory of [[optimal control]] is concerned with operating a [[dynamic system]] at minimum cost. The case where the system dynamics are described by a set of [[linear differential equation]]s and the cost is described by a [[quadratic polynomial|quadratic]] [[functional (mathematics)|function]] is called the LQ problem. One of the main results in the theory is that the solution is provided by the '''linear-quadratic regulator (LQR)''', a feedback controller whose equations are given below. The LQR is an important part of the solution to the [[Linear-quadratic-Gaussian control|LQG problem]]. Like the LQR problem itself, the LQG problem is one of the most fundamental problems in [[control theory]].

==General description==

In practice this means that the settings of a (regulating) controller governing a machine or process (such as an airplane or a chemical reactor) are found by using a mathematical algorithm that minimizes a cost function with weighting factors supplied by an engineer. The "cost" function is often defined as a sum of the deviations of key measurements from their desired values. In effect the algorithm finds those controller settings that minimize the undesired deviations, such as deviations from a desired altitude or process temperature. The magnitude of the control action itself is often included in this sum so as to limit the energy expended by the control action.

In effect, the LQR algorithm takes care of the tedious work of optimizing the controller that would otherwise be done by the control systems engineer. However, the engineer still needs to specify the weighting factors and compare the results with the specified design goals. Often this means that controller synthesis is still an iterative process in which the engineer judges the produced "optimal" controllers through simulation and then adjusts the weighting factors to obtain a controller more in line with the specified design goals.

The LQR algorithm is, at its core, an automated way of finding an appropriate [[state space (controls)|state-feedback controller]]. As such, it is not uncommon for control engineers to prefer alternative methods, such as [[full state feedback]] (also known as pole placement), in which there is a clearer relationship between the adjusted parameters and the resulting changes in controller behavior. The difficulty of finding the right weighting factors limits the application of LQR-based controller synthesis.

==Finite-horizon, continuous-time LQR==

For a continuous-time linear system, defined on <math>t\in[t_0,t_1]</math>, described by

:<math>\dot{x} = Ax + Bu</math>

with a quadratic cost function defined as

:<math>J = \frac{1}{2} x^T(t_1)F(t_1)x(t_1) + \frac{1}{2}\int\limits_{t_0}^{t_1} \left( x^T Q x + u^T R u \right) dt</math>

where <math>F</math> and <math>Q</math> are symmetric positive semi-definite and <math>R</math> is symmetric positive definite (the factor <math>\frac{1}{2}</math> is a common convention that simplifies the optimality conditions below and does not change the minimizer), the feedback control law that minimizes the value of the cost is

:<math>u = -K x \,</math>

where <math>K</math> is given by

:<math>K = R^{-1} B^T P(t) \,</math>

and <math>P</math> is found by solving the continuous-time [[Riccati differential equation]]

:<math>A^T P(t) + P(t) A - P(t) B R^{-1} B^T P(t) + Q = - \dot{P}(t) \,</math>

with the boundary condition

:<math>P(t_1) = F(t_1).</math>
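
In general <math>P(t)</math> has no closed form and is computed numerically. A minimal sketch of one way to do this, integrating the Riccati differential equation backwards from <math>t_1</math> with SciPy; the system matrices below (a double integrator with identity weights) are illustrative assumptions, not taken from any particular application:

<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative system (assumed example): a double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # control weighting
F = np.eye(2)            # terminal weighting F(t_1)
t0, t1 = 0.0, 5.0

def riccati_rhs(t, p_flat):
    # dP/dt = -(A^T P + P A - P B R^{-1} B^T P + Q)
    P = p_flat.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T) @ P + Q)
    return dP.ravel()

# Integrate backwards from t1 to t0, starting at the boundary condition P(t1) = F
sol = solve_ivp(riccati_rhs, (t1, t0), F.ravel(), dense_output=True)

def K(t):
    # Time-varying gain K(t) = R^{-1} B^T P(t)
    P = sol.sol(t).reshape(2, 2)
    return np.linalg.solve(R, B.T @ P)
</syntaxhighlight>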

The first-order conditions for ''J''<sub>min</sub> are:

'''(i) State equation'''

:<math>\dot{x} = Ax + Bu</math>

'''(ii) [[Costate equations|Co-state equation]]'''

:<math>-\dot{\lambda} = Qx + A^T \lambda</math>

'''(iii) Stationary equation'''

:<math>0 = Ru + B^T \lambda</math>

'''(iv) Boundary conditions'''

:<math>x(t_0) = x_0</math>

and

:<math>\lambda(t_1) = F(t_1) x(t_1)</math>
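
These conditions reduce to the feedback law and Riccati equation above through the standard substitution <math>\lambda(t) = P(t)x(t)</math>; a brief sketch of this routine step: the stationary equation (iii) gives <math>u = -R^{-1}B^T\lambda = -R^{-1}B^T P x = -Kx</math>, and differentiating <math>\lambda = Px</math> and substituting the state and co-state equations gives

:<math>-\dot{P}x - P\left(Ax - BR^{-1}B^T P x\right) = Qx + A^T P x,</math>

which holds along every trajectory <math>x</math> exactly when <math>P</math> satisfies the Riccati differential equation above, while <math>\lambda(t_1) = F(t_1)x(t_1)</math> gives the boundary condition <math>P(t_1) = F(t_1)</math>.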

==Infinite-horizon, continuous-time LQR==

For a continuous-time linear system described by

:<math>\dot{x} = Ax + Bu</math>

with a cost functional defined as

:<math>J = \int_{0}^\infty \left( x^T Q x + u^T R u \right) dt</math>

the feedback control law that minimizes the value of the cost is

:<math>u = -K x \,</math>

where <math>K</math> is given by

:<math>K = R^{-1} B^T P \,</math>

and <math>P</math> is found by solving the continuous-time [[algebraic Riccati equation]]

:<math>A^T P + P A - P B R^{-1} B^T P + Q = 0. \,</math>
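
A minimal computational sketch: SciPy's <code>solve_continuous_are</code> solves this equation directly; the matrices below are illustrative assumptions, not from a particular application:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative system (assumed example): a double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solves A^T P + P A - P B R^{-1} B^T P + Q = 0
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # K = R^{-1} B^T P
# The closed-loop dynamics are then x_dot = (A - B K) x
</syntaxhighlight>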

==Finite-horizon, discrete-time LQR==

For a discrete-time linear system described by<ref>{{cite book |last= Chow |first= Gregory C. |title= Analysis and Control of Dynamic Economic Systems |publisher= Krieger Publ. Co. |year= 1986 |isbn= 0-89874-969-7}}</ref>

:<math>x_{k+1} = A x_k + B u_k \,</math>

with a performance index defined as

:<math>J = \sum\limits_{k=0}^{N} \left( x_k^T Q x_k + u_k^T R u_k \right)</math>

the optimal control sequence minimizing the performance index is given by

:<math>u_k = -F_k x_k \,</math>

where

:<math>F_k = (R + B^T P_k B)^{-1} B^T P_k A \,</math>

and <math>P_k</math> is found iteratively backwards in time from the dynamic Riccati equation

:<math>P_{k-1} = Q + A^T \left( P_k - P_k B \left( R + B^T P_k B \right)^{-1} B^T P_k \right) A</math>

with terminal condition <math>P_N = Q</math>.
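
A minimal sketch of this backward recursion in Python; the matrices and horizon length below are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

# Illustrative system (assumed example): a discrete-time double integrator
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
N = 50

P = Q                          # terminal condition P_N = Q
F_gains = [None] * (N + 1)
for k in range(N, 0, -1):
    # F_k = (R + B^T P_k B)^{-1} B^T P_k A
    F_gains[k] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    # P_{k-1} = Q + A^T (P_k - P_k B (R + B^T P_k B)^{-1} B^T P_k) A
    P = Q + A.T @ (P - P @ B @ np.linalg.solve(R + B.T @ P @ B, B.T @ P)) @ A
F_gains[0] = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # F_0 from P_0
</syntaxhighlight>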

==Infinite-horizon, discrete-time LQR==

For a discrete-time linear system described by

:<math>x_{k+1} = A x_k + B u_k \,</math>

with a performance index defined as

:<math>J = \sum\limits_{k=0}^{\infty} \left( x_k^T Q x_k + u_k^T R u_k \right)</math>

the optimal control sequence minimizing the performance index is given by

:<math>u_k = -F x_k \,</math>

where

:<math>F = (R + B^T P B)^{-1} B^T P A \,</math>

and <math>P</math> is the unique positive-definite solution to the discrete-time [[algebraic Riccati equation]] (DARE)

:<math>P = Q + A^T \left( P - P B \left( R + B^T P B \right)^{-1} B^T P \right) A.</math>

Note that one way to solve this equation is by iterating the dynamic Riccati equation of the finite-horizon case until it converges.
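
A minimal sketch showing both routes, the direct DARE solution via SciPy and the fixed-point iteration; the matrices are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative system (assumed example): a discrete-time double integrator
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Direct solution of the DARE
P = solve_discrete_are(A, B, Q, R)
F = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # steady-state gain

# Iterating the finite-horizon Riccati recursion converges to the same P
P_it = Q
for _ in range(500):
    P_it = Q + A.T @ (P_it - P_it @ B @ np.linalg.solve(R + B.T @ P_it @ B, B.T @ P_it)) @ A
print(np.allclose(P, P_it))   # True for this example
</syntaxhighlight>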

==References==
<references/>

:*{{cite book | last1 = Kwakernaak | first1 = Huibert | last2 = Sivan | first2 = Raphael | year = 1972 | title = Linear Optimal Control Systems | edition = 1st | publisher = Wiley-Interscience | isbn = 0-471-51110-2 }}
:*{{cite book | last = Sontag | first = Eduardo | authorlink = Eduardo D. Sontag | year = 1998 | title = Mathematical Control Theory: Deterministic Finite Dimensional Systems | edition = 2nd | publisher = Springer | isbn = 0-387-98489-5 }}

==External links==
* [http://www.mathworks.com/help/toolbox/control/ref/lqr.html MATLAB function for Linear Quadratic Regulator design]
* [http://reference.wolfram.com/mathematica/ref/LQRegulatorGains.html Mathematica function for Linear Quadratic Regulator design]

{{DEFAULTSORT:Linear-Quadratic Regulator}}
[[Category:Optimal control]]