{{Infobox scientist
| name              =
| image            = <!--(filename only)-->
| image_size        =
| alt              =
| caption          =
| birth_date        = 1967
| birth_place      =
| residence        =
| citizenship      =
| nationality      =
| fields            =
| workplaces        =
| alma_mater        =
| thesis_title      =
| thesis_url        =
| thesis_year      =
| doctoral_advisor  =
| academic_advisors =
| doctoral_students =
| notable_students  =
| known_for        =
| author_abbrev_bot =
| author_abbrev_zoo =
| influences        =
| influenced        =
| awards            =
| signature        = <!--(filename only)-->
| signature_alt    =
| website          = http://www.hutter1.net
| footnotes        =
| spouse            =
}}
'''Marcus Hutter''' (born 1967) is a German computer scientist and professor at the [[Australian National University]].  Hutter was born and educated in [[Munich]], where he studied [[physics]] and [[computer science]] at the [[Technical University of Munich]]. In 2000 he joined [[Jürgen Schmidhuber]]'s group at the Swiss [[Artificial Intelligence]] lab [[IDSIA]], where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on [[Kolmogorov complexity]] and  [[Ray Solomonoff]]'s theory of universal [[inductive inference]]. In 2006 he also accepted a professorship at the Australian National University in [[Canberra]].
 
Hutter's notion of universal AI describes the optimal strategy of an agent that wants to maximize its future expected reward in some unknown dynamic environment, up to some fixed future horizon. This is the general [[reinforcement learning]] problem. The only assumption, following Solomonoff, is that the reactions of the environment in response to the agent's actions follow some unknown but [[Computability theory (computer science)|computable]] [[probability distribution]].
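The setting above can be sketched as a plain agent–environment interaction loop. This is a didactic illustration of the general reinforcement-learning protocol, not code from Hutter's work; the toy environment and policy names are invented for the example.

```python
# Minimal sketch of the general reinforcement-learning setting:
# an agent acts in an unknown environment and receives a percept
# (observation, reward) after each action, up to a fixed horizon.

def run_episode(agent_policy, environment_step, horizon):
    """Run agent-environment interaction up to a fixed horizon and
    return the total reward collected."""
    history = []           # sequence of (action, observation, reward)
    total_reward = 0.0
    for t in range(horizon):
        action = agent_policy(history)
        observation, reward = environment_step(history, action)
        history.append((action, observation, reward))
        total_reward += reward
    return total_reward

# Toy example: the environment rewards the agent for repeating its
# previous action (illustrative only).
def toy_env(history, action):
    if history and history[-1][0] == action:
        return ("same", 1.0)
    return ("changed", 0.0)

def toy_policy(history):
    return 0  # always act the same way

print(run_episode(toy_policy, toy_env, 5))  # 4.0 (first step earns no reward)
```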
 
== Universal artificial intelligence (AIXI)==
Hutter uses Solomonoff's [[inductive inference]] as a mathematical formalization of Occam's razor.<ref>{{cite journal |author=Hutter, M. |title=On the existence and convergence of computable universal priors |journal=Algorithmic Learning Theory |volume=2842 |pages=298–312 |year=2003 |doi=10.1007/978-3-540-39624-6_24 |url=http://www.springerlink.com/content/9frc0g6kpn73ma46/ |arxiv=cs/0305052 |series=Lecture Notes in Computer Science |isbn=978-3-540-20291-2}}</ref> Hutter adds to this formalization the expected value of an action:  shorter ([[Kolmogorov complexity]]) computable theories have more weight when calculating the [[expected value]] of an action across all computable theories which perfectly describe previous observations.<ref>{{harvnb|Hutter|2004}}</ref>
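The length-based weighting can be illustrated with a finite toy version: among candidate "theories" (here just bitstrings standing in for programs) consistent with the observations, each gets prior weight 2<sup>−length</sup>, so shorter theories dominate the mixture. This is a didactic sketch under that assumption, not Solomonoff's actual (incomputable) prior.

```python
from fractions import Fraction

def prior_weight(program):
    # Shorter programs get exponentially more prior weight: 2^-length.
    return Fraction(1, 2 ** len(program))

def mixture_weights(candidates, consistent):
    """Normalize 2^-length weights over the candidates that perfectly
    describe the observations (per the `consistent` predicate)."""
    live = [p for p in candidates if consistent(p)]
    total = sum(prior_weight(p) for p in live)
    return {p: prior_weight(p) / total for p in live}

# Two theories are consistent with the data; the shorter one
# ("01", weight 1/4) gets twice the weight of "011" (weight 1/8).
cands = ["01", "011", "111"]
w = mixture_weights(cands, lambda p: p.startswith("01"))
print(w["01"], w["011"])  # 2/3 1/3
```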
 
At any time, given the limited observation sequence so far, what is the [[Bayes estimator|Bayes-optimal]] way of selecting the next action? Hutter proved that the answer is to use Solomonoff's universal [[prior distribution|prior]] to predict the probability of each possible future, and execute the first action of the best policy<ref>{{cite web |author=Hutter, M. |title=Principles of Solomonoff induction and AIXI |format=PDF |url=http://www.hutter1.net/publ/aixiaxiom2.pdf}}</ref> (a policy is any program that will output all the next actions and input all the next perceptions up to the horizon). A policy is best if, on a weighted average of all the possible futures, it will maximize the predicted reward up to the horizon. He called this universal algorithm AIXI.
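A finite toy version of this decision rule can be written down directly: enumerate all policies up to the horizon, score each by its reward averaged over candidate environments weighted by a prior, and execute the first action of the best policy. The environments and weights below are invented for illustration; real AIXI mixes over all computable environments, which is incomputable.

```python
from itertools import product

def best_first_action(actions, horizon, envs):
    """Expectimax over full policies: envs is a list of
    (weight, reward_fn) pairs, where reward_fn maps an action
    sequence to the total reward earned in that environment.
    Returns the first action of the highest-scoring policy."""
    def score(policy):
        return sum(w * reward_fn(policy) for w, reward_fn in envs)
    best = max(product(actions, repeat=horizon), key=score)
    return best[0]

# Toy prior over two environments: with weight 0.75 the world rewards
# action 1, with weight 0.25 it rewards action 0.
envs = [(0.75, lambda pi: sum(pi)),
        (0.25, lambda pi: sum(1 - a for a in pi))]
print(best_first_action([0, 1], 3, envs))  # 1
```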
 
This is mainly a theoretical result. To overcome the problem that Solomonoff's prior is incomputable, in 2002 Hutter also published an [[asymptote|asymptotically]] fastest algorithm for all well-defined problems. Given some formal description of a problem class, the algorithm systematically generates all [[Mathematical proof|proofs]] in a sufficiently powerful [[axiomatic system]] that allows for proving time [[Upper and lower bounds|bounds]] of solution-computing programs. Simultaneously, whenever a proof has been found that shows that a particular program has a better time bound than the previous best, a clever resource allocation scheme will assign most of the remaining search time to this program. Hutter showed that his method is essentially as fast as the unknown fastest program for solving problems from the given class, save for an additive [[Constant (mathematics)|constant]] independent of the problem instance. For example, if the problem size is <math>n</math>, and there exists an initially unknown program that solves any problem in the class within <math>n^7</math> computational steps, then Hutter's method will solve it within <math>5n^7 + O(1)</math> steps. The additive constant hidden in the [[Big O notation|<math>O()</math> notation]] may be large enough to render the algorithm practically infeasible despite its useful theoretical properties.
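The resource-allocation idea can be sketched very loosely: interleave candidate solvers so that no single slow candidate monopolizes the budget. The sketch below omits the essential proof-search component of Hutter's construction entirely and simply simulates candidates as generators that eventually yield an answer; all names are illustrative.

```python
def interleave(candidates, budget):
    """Run candidate generators round-robin; return the first real
    result any of them produces, or None if the budget runs out.
    A candidate signals "still computing" by yielding None."""
    active = [iter(c) for c in candidates]
    for _ in range(budget):
        for gen in active:
            out = next(gen, None)
            if out is not None:
                return out
    return None

def slow_solver():
    for _ in range(100):
        yield  # still computing
    yield 42

def fast_solver():
    for _ in range(3):
        yield
    yield 42

# The fast solver finishes on its 4th step, well inside the budget,
# even though the slow solver is still running.
print(interleave([slow_solver(), fast_solver()], 10))  # 42
```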
 
Several algorithms approximate AIXI to make it usable on a modern computer. The more computing power they are given, the more they behave like AIXI (their [[limit (math)|limit]] is AIXI).<ref>{{cite arXiv |last1=Veness |first1=Joel |author2=Kee Siong Ng |last3=Hutter |first3=Marcus |last4=Uther |first4=William  |last5=Silver |first5=David  |eprint=0909.0801 |title=A Monte Carlo AIXI Approximation |year=2009 |class=cs.AI}}</ref><ref>{{cite journal |last1=Veness |first1=Joel |author2=Kee Siong Ng |last3=Hutter |first3=Marcus |last4=Silver |first4=David |arxiv=1007.2049v1 |title=Reinforcement Learning via AIXI Approximation |year=2010 |journal=Proc. 24th AAAI Conference on Artificial Intelligence (AAAI 2010) |pages=605–611 }}</ref><ref>{{cite book |last=Pankov |first=S. |chapter=A computational approximation to the AIXI model |chapterurl=http://books.google.com/books?id=a_ZR81Z25z0C&pg=PA258 |editor=Pei Wang |title=Artificial General Intelligence, 2008: Proceedings of the First AGI Conference |url=http://books.google.com/books?id=a_ZR81Z25z0C |year=2008 |publisher=IOS Press |isbn=978-1-58603-833-5|pages=256–267}}</ref>
 
==Hutter Prize for Lossless Compression of Human Knowledge==
On August 6, 2006, Hutter announced the '''[[Hutter Prize]] for Lossless Compression of Human Knowledge''' with an initial purse of 50,000 Euros, the intent of which is to encourage the advancement of [[artificial intelligence]] through the exploitation of Hutter's theory of optimal universal artificial intelligence.
 
==Partial bibliography==
*{{cite book |first=Marcus |last=Hutter |title=Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability |url=http://books.google.com/books?id=NP53iZGt4KUC |date=2004 |publisher=Springer |isbn=978-3-540-22139-5 |ref=harv |authormask=1}}
*{{cite journal |first=Marcus |last=Hutter |title=On generalized computable universal priors and their convergence |journal=Theoretical Computer Science |volume=364 |issue=1 |pages=27–41 |year=2006 |doi=10.1016/j.tcs.2006.07.039 |url=http://www.sciencedirect.com/science/article/pii/S0304397506004889 |authormask=1}}
*{{cite journal |first=Marcus |last=Hutter |title=Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet |journal=Journal of Machine Learning Research |volume=4 |pages=971–1000 |year=2003 |url=http://www.jmlr.org/papers/volume4/hutter03a/hutter03a.pdf |format=PDF |authormask=1}}
*{{cite journal |first=Marcus |last=Hutter |title=The Fastest and Shortest Algorithm for All Well-Defined Problems |journal=International Journal of Foundations of Computer Science |volume=13 |issue=3 |pages=431–443 |year=2002 |authormask=1 |doi=10.1142/S0129054102001199}}
 
==References==
{{Reflist}}
 
==External links==
*[http://www.idsia.ch/~marcus/official/index.htm Home page]
*[http://prize.hutter1.net Hutter Prize for Lossless Compression of Human Knowledge]
*[http://www.vimeo.com/7321732 Video of Marcus Hutter's conference at Singularity Summit 2009 — Foundations of Intelligent Agents]
{{Authority control |VIAF=67174976 |LCCN=nb/2004/309647}}
<!-- Metadata: see [[Wikipedia:Persondata]] -->
{{Persondata
|NAME= Hutter, Marcus
|ALTERNATIVE NAMES=
|SHORT DESCRIPTION=Computer scientist
|DATE OF BIRTH=1967
|PLACE OF BIRTH=
|DATE OF DEATH=
|PLACE OF DEATH=
}}
{{DEFAULTSORT:Hutter, Marcus}}
[[Category:1967 births]]
[[Category:Living people]]
[[Category:Machine learning researchers]]
[[Category:German computer scientists]]
[[Category:Australian academics]]
[[Category:Technical University Munich alumni]]
[[Category:Australian National University faculty]]
