In [[mathematics]], a '''relevance vector machine (RVM)''' is a [[machine learning]] technique that uses [[Bayesian inference]] to obtain [[Parsimony|parsimonious]] solutions for [[Regression analysis|regression]] and [[Statistical classification|classification]].<ref>{{cite journal | last=Tipping | first=Michael E. |title=Sparse Bayesian Learning and the Relevance Vector Machine |year=2001 |journal = [[Journal of Machine Learning Research]] |volume=1 |pages=211&ndash;244 |url=http://jmlr.csail.mit.edu/papers/v1/tipping01a.html }}</ref>
The RVM has an identical functional form to the [[support vector machine]], but provides probabilistic classification.
 
The RVM is equivalent to a [[Gaussian process]] model with [[covariance function]]:
:<math>k(\mathbf{x},\mathbf{x'}) = \sum_{j=1}^N \frac{1}{\alpha_j} \varphi(\mathbf{x},\mathbf{x}_j)\varphi(\mathbf{x}',\mathbf{x}_j) </math>
where <math>\varphi</math> is the [[kernel function]] (usually Gaussian), and <math>\mathbf{x}_1,\ldots,\mathbf{x}_N</math> are the input vectors of the [[training set]].{{Citation needed|date=February 2010}}
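As a minimal sketch of the formula above, the following computes the induced covariance <math>k(\mathbf{x},\mathbf{x'})</math> directly, assuming a Gaussian (RBF) basis function with width parameter <code>gamma</code>, which is an illustrative choice not fixed by the text:

```python
import numpy as np

def rvm_covariance(X, x, x_prime, alpha, gamma=1.0):
    """Covariance k(x, x') of the GP equivalent to an RVM whose basis
    functions are centred on the training inputs X.

    X      : (N, d) array of training inputs x_1, ..., x_N
    alpha  : length-N sequence of weight precisions alpha_j
    gamma  : assumed width of the Gaussian kernel (illustrative)
    """
    # Gaussian basis function phi(a, b) = exp(-gamma * ||a - b||^2)
    phi = lambda a, b: np.exp(-gamma * np.sum((a - b) ** 2))
    # Sum over training points, each term weighted by 1/alpha_j
    return sum(phi(x, xj) * phi(x_prime, xj) / aj
               for xj, aj in zip(X, alpha))
```

Basis functions with large precision <math>\alpha_j</math> contribute little to the covariance; in a trained RVM most <math>\alpha_j</math> diverge, so only a few "relevance vectors" remain.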
 
Compared with [[support vector machine]]s (SVMs), the Bayesian formulation of the RVM avoids the SVM's free parameters (which usually require cross-validation-based post-optimization). However, RVMs use an [[expectation maximization]] (EM)-like learning method and are therefore at risk of converging to local minima. This is unlike the standard [[sequential minimal optimization]] (SMO)-based algorithms employed by [[Support vector machine|SVM]]s, which are guaranteed to find the global optimum of a convex problem.
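The EM-like training loop can be sketched as follows, using the type-II maximum-likelihood re-estimation updates from Tipping (2001) for the regression case. The noise precision <code>beta</code>, iteration count, and pruning threshold are assumed values for illustration, not prescribed by the text:

```python
import numpy as np

def rvm_regression(Phi, t, beta=100.0, n_iter=50, prune=1e6):
    """Minimal sketch of sparse Bayesian regression (Tipping, 2001).

    Phi  : (N, M) design matrix of basis-function responses
    t    : (N,) target vector
    beta : assumed known noise precision (illustrative)

    Iteratively re-estimates the weight precisions alpha_j; weights
    whose alpha_j diverges past `prune` are irrelevant and set to zero.
    """
    N, M = Phi.shape
    alpha = np.ones(M)                      # initial weight precisions
    for _ in range(n_iter):
        A = np.diag(alpha)
        # Posterior covariance and mean of the weights
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + A)
        mu = beta * Sigma @ Phi.T @ t
        # "Well-determinedness" of each weight, gamma_j in [0, 1]
        gamma = 1.0 - alpha * np.diag(Sigma)
        # Re-estimate precisions: alpha_j <- gamma_j / mu_j^2
        alpha = np.clip(gamma / (mu ** 2 + 1e-12), 0.0, prune)
    mu = mu.copy()
    mu[alpha >= prune] = 0.0                # prune irrelevant basis functions
    return mu
```

Because each update only increases the marginal likelihood locally, different initializations of <code>alpha</code> can reach different fixed points, which is the local-minima risk noted above.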
 
The relevance vector machine is [[Software patents under United States patent law|patented in the United States]] by [[Microsoft]].<ref>{{cite patent
|country = US
|number = 6633857
|title = Relevance vector machine
|inventor = Michael E. Tipping
}}</ref>
 
== See also ==
* [[Kernel trick]]
 
== References ==
{{reflist}}
 
== Software ==
* [http://dlib.net dlib C++ Library]
* [http://www.terborg.net/research/kml/ The Kernel-Machine Library]
* [http://www.maths.bris.ac.uk/R/web/packages/rvmbinary/index.html rvmbinary:R package for binary classification]
 
==External links==
*[http://www.relevancevector.com Tipping's webpage on Sparse Bayesian Models and the RVM]
*[http://www.tristanfletcher.co.uk/RVM%20Explained.pdf A Tutorial on RVM by Tristan Fletcher]
 
[[Category:Classification algorithms]]
[[Category:Kernel methods for machine learning]]
[[Category:Non-parametric Bayesian methods]]
