'''Navigation function''' usually refers to a function of position, velocity, acceleration and time which is used to plan robot trajectories through the environment. Generally, the goal of a navigation function is to create feasible, safe paths that avoid obstacles while allowing a robot to move from its starting configuration to its goal configuration.
== Potential functions as navigation functions ==

[[File:pf as navigation function.png|thumb|350px|A potential function. Imagine dropping a marble on the surface. It will avoid the three obstacles and eventually reach the goal position in the center.]]
Potential functions assume that the environment or work space is known. Obstacles are assigned a high potential value, and the goal position is assigned a low potential. To reach the goal position, a robot only needs to follow the negative [[gradient]] of the surface.
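As a minimal numerical sketch (the goal, obstacle position, step size and gains below are illustrative assumptions, not taken from the literature), gradient descent on a combined attractive and repulsive potential drives a point robot toward the goal while steering it away from the obstacle:

```python
import numpy as np

# Illustrative workspace: quadratic attractive well at GOAL, a repulsive
# 1/d term around OBSTACLE. All positions and gains are made up.
GOAL = np.array([4.0, 4.0])
OBSTACLE = np.array([2.0, 2.5])

def potential(x):
    """Attractive quadratic well at the goal plus a repulsive term
    that blows up as the robot approaches the obstacle."""
    attract = 0.5 * np.sum((x - GOAL) ** 2)
    d = np.linalg.norm(x - OBSTACLE)
    return attract + (1.0 / d if d > 1e-9 else 1e9)

def follow_negative_gradient(x, step=0.01, iters=3000):
    """Estimate the gradient by central differences and step downhill."""
    for _ in range(iters):
        g = np.zeros(2)
        for i in range(2):
            e = np.zeros(2)
            e[i] = 1e-5
            g[i] = (potential(x + e) - potential(x - e)) / 2e-5
        x = x - step * g
    return x

x_final = follow_negative_gradient(np.array([0.0, 0.0]))
# x_final ends close to the goal, having skirted the obstacle
```

Note that plain gradient descent on such a potential can stall in local minima for unlucky obstacle layouts; the navigation-function axioms below rule this out by construction.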

We can formalize this concept mathematically as follows. Let <math>X</math> be the state space of all possible configurations of a robot, and let <math>X_g \subset X</math> denote the goal region of the state space.

Then a potential function <math>\phi(x)</math> is called a (feasible) navigation function if<ref>LaValle, Steven M., [http://planning.cs.uiuc.edu/node369.html ''Planning Algorithms'', Chapter 8]</ref>
#<math>\phi(x) = 0\ \forall x \in X_g</math>
#<math>\phi(x) = \infty</math> if and only if no point in <math>X_g</math> is reachable from <math>x</math>.
#For every reachable state <math>x \in X \setminus X_g</math>, the local operator produces a state <math>x'</math> for which <math>\phi(x') < \phi(x)</math>.
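On a discretized workspace these axioms can be satisfied by a breadth-first "wavefront" expansion from the goal. The sketch below (the grid layout is an illustrative assumption) builds <math>\phi</math> as the hop count to the goal and then repeatedly applies the local operator, i.e. moves to the neighbour with the smallest <math>\phi</math>, until the goal is reached:

```python
from collections import deque

# Illustrative 4-connected grid: '#' cells are obstacles, 'G' is the goal.
grid = [
    "....",
    ".##.",
    ".#G.",
    "....",
]
rows, cols = len(grid), len(grid[0])
goal = next((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == "G")

INF = float("inf")
phi = {(r, c): INF for r in range(rows) for c in range(cols) if grid[r][c] != "#"}
phi[goal] = 0  # axiom 1: phi vanishes on the goal region

# Wavefront expansion (breadth-first search) outward from the goal.
queue = deque([goal])
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = (r + dr, c + dc)
        if nb in phi and phi[nb] == INF:
            phi[nb] = phi[(r, c)] + 1
            queue.append(nb)

# Axiom 3: from every reachable non-goal cell, some neighbour strictly
# decreases phi, so greedy descent on phi reaches the goal.
def local_operator(cell):
    r, c = cell
    return min(((r + dr, c + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (r + dr, c + dc) in phi), key=phi.get)

cell = (0, 0)
while phi[cell] > 0:
    cell = local_operator(cell)
assert cell == goal
```

Unreachable cells (none in this layout) would keep <math>\phi = \infty</math>, matching axiom 2.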

== Navigation function in optimal control ==

While for certain applications it suffices to have a feasible navigation function, in many cases it is desirable to have a navigation function that is optimal with respect to a given cost [[functional]] <math>J</math>. Formalized as an [[optimal control]] problem, we can write

:<math>\text{minimize } J(x_{1:T},u_{1:T}) = \int\limits_T L(x_t,u_t,t)\, dt</math>
:<math>\text{subject to } \dot{x}_t = f(x_t,u_t),</math>

where <math>x</math> is the state, <math>u</math> is the control to apply, <math>L</math> is the cost incurred in state <math>x</math> when applying control <math>u</math>, and <math>f</math> models the transition dynamics of the system.

Applying [[Bellman equation|Bellman's principle of optimality]], the optimal cost-to-go function is defined as

:<math>\phi(x_t) = \min_{u_t \in U(x_t)} \Big\{ L(x_t,u_t) + \phi(f(x_t,u_t)) \Big\}</math>

Together with the axioms defined above, we can define the optimal navigation function as

#<math>\phi(x) = 0\ \forall x \in X_g</math>
#<math>\phi(x) = \infty</math> if and only if no point in <math>X_g</math> is reachable from <math>x</math>.
#For every reachable state <math>x \in X \setminus X_g</math>, the local operator produces a state <math>x'</math> for which <math>\phi(x') < \phi(x)</math>.
#<math>\phi(x_t) = \min_{u_t \in U(x_t)} \Big\{ L(x_t,u_t) + \phi(f(x_t,u_t)) \Big\}</math>
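In a discrete setting, the Bellman backup above can be iterated to a fixed point (value iteration). The sketch below uses a made-up four-state system with illustrative action costs, not an example from the source:

```python
import math

# Illustrative deterministic system: states, actions and costs are made up.
states = ["A", "B", "C", "G"]          # "G" is the goal
actions = {                            # action -> (next state f(x,u), cost L(x,u))
    "A": {"toB": ("B", 1.0), "toC": ("C", 4.0)},
    "B": {"toC": ("C", 1.0), "toG": ("G", 5.0)},
    "C": {"toG": ("G", 1.0)},
    "G": {},
}

phi = {s: (0.0 if s == "G" else math.inf) for s in states}  # axiom 1

# Bellman backup: phi(x) = min_u [ L(x,u) + phi(f(x,u)) ]
for _ in range(len(states)):           # enough sweeps to converge here
    for s in states:
        if s == "G":
            continue
        phi[s] = min((cost + phi[nxt] for nxt, cost in actions[s].values()),
                     default=math.inf)

print(phi)  # {'A': 3.0, 'B': 2.0, 'C': 1.0, 'G': 0.0}
```

Note that the cheapest route from A is A-B-C-G (cost 3), not the direct-looking A-B-G (cost 6); the fixed point of the backup encodes exactly this optimal cost-to-go.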

== Stochastic navigation function ==

If we assume that the transition dynamics of the system or the cost function are subject to noise, we obtain a [[optimal control|stochastic optimal control]] problem with cost <math>J(x_t,u_t)</math> and dynamics <math>f</math>. In the field of [[reinforcement learning]], the cost is replaced by a reward function <math>R(x_t,u_t)</math> and the dynamics by the transition probabilities <math>P(x_{t+1}|x_t,u_t)</math>.
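In the stochastic case the Bellman backup minimizes an ''expected'' cost-to-go, <math>\phi(x_t) = \min_{u_t} \big\{ L(x_t,u_t) + \textstyle\sum_{x_{t+1}} P(x_{t+1}|x_t,u_t)\, \phi(x_{t+1}) \big\}</math>. A minimal sketch (the two-state system and the 0.8/0.2 "slip" probabilities below are illustrative assumptions):

```python
# Illustrative noisy dynamics: action "go" reaches the goal with
# probability 0.8 and slips back to the start state with probability 0.2.
P = {  # (state, action) -> list of (next_state, probability)
    ("S", "go"): [("G", 0.8), ("S", 0.2)],
}
cost = {("S", "go"): 1.0}

phi = {"S": 0.0, "G": 0.0}
for _ in range(50):  # iterate the stochastic Bellman backup to a fixed point
    phi["S"] = min(cost[(s, a)] + sum(p * phi[nxt] for nxt, p in P[(s, a)])
                   for (s, a) in P if s == "S")

# Fixed point: phi(S) = 1 + 0.2 * phi(S), hence phi(S) = 1.25
print(round(phi["S"], 2))  # 1.25
```

The slip probability inflates the expected cost above the one-step cost of 1, reflecting the occasional repeated attempts.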

== See also ==
*[[Control theory]]
*[[Optimal control]]
*[[Robot control]]
*[[Motion planning]]
*[[Reinforcement learning]]

== References ==
{{Reflist}}

* {{citation
 | last1 = LaValle | first1 = Steven M.
 | title = Planning Algorithms
 | edition = First
 | publisher = Cambridge University Press
 | year = 2006
 | isbn = 978-0-521-86205-9
 | url = http://planning.cs.uiuc.edu/
}}
* {{citation
 | last1 = Laumond | first1 = Jean-Paul
 | title = Robot Motion Planning and Control
 | edition = First
 | publisher = Springer
 | year = 1998
 | isbn = 3-540-76219-1
 | url = http://homepages.laas.fr/jpl/book-toc.html
}}

== External links ==
* [http://github.com/johnyf/nfsim NFsim]: MATLAB toolbox for motion planning using navigation functions.

[[Category:Robot control]]

{{robo-stub}}