Weighted majority algorithm (machine learning)

In [[machine learning]], the '''weighted majority algorithm (WMA)''' is a meta-learning algorithm used to construct a compound algorithm from a pool of prediction algorithms, which can be any type of learning algorithm, classifier, or even human expert. The algorithm assumes no prior knowledge about the accuracy of the algorithms in the pool, but that there is sufficient reason to believe one or more will perform well.
 
Assume that the problem is a binary decision problem. To construct the compound algorithm, each algorithm in the pool is assigned a positive weight. The compound algorithm then collects weighted votes from all the algorithms in the pool and outputs the prediction carrying the larger total weight. Whenever the compound algorithm makes a mistake, every algorithm in the pool that voted for the wrong prediction has its weight multiplied by a fixed ratio β, where 0 < β < 1.
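The predict-then-discount loop above can be sketched in a few lines of Python. This is a minimal illustration, not a reference implementation; the names <code>wma_predict</code> and <code>wma_update</code> are invented for this sketch, and ties are broken arbitrarily in favor of 0.

```python
def wma_predict(weights, votes):
    """Return the weighted-majority prediction for binary votes (0 or 1).

    Ties are broken in favor of 0 here; the choice is arbitrary.
    """
    total_for_1 = sum(w for w, v in zip(weights, votes) if v == 1)
    total_for_0 = sum(w for w, v in zip(weights, votes) if v == 0)
    return 1 if total_for_1 > total_for_0 else 0


def wma_update(weights, votes, outcome, beta=0.5):
    """Multiply the weight of every algorithm that voted wrongly by beta."""
    return [w * beta if v != outcome else w for w, v in zip(weights, votes)]


# Usage: three experts; the first is always right, the others are noisy.
weights = [1.0, 1.0, 1.0]
rounds = [([1, 1, 0], 1), ([0, 1, 0], 0), ([1, 0, 0], 1)]
for votes, outcome in rounds:
    prediction = wma_predict(weights, votes)
    weights = wma_update(weights, votes, outcome, beta=0.5)
# After these rounds only the reliable first expert keeps its full weight:
# weights == [1.0, 0.25, 0.25]
```

Note that weights are only ever discounted, never increased, so a consistently wrong algorithm's influence decays geometrically while a reliable one comes to dominate the vote.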
 
It can be shown that the number of mistakes made in a given sequence of predictions from a pool of algorithms <math> \mathbf{A} </math> is at most

:<math>O(\log|\mathbf{A}|+m)</math>

if the best algorithm in <math> \mathbf{A} </math> makes at most <math> m </math> mistakes.
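The bound follows from a standard potential (total-weight) argument; a sketch, assuming every algorithm starts with weight 1 and wrong voters are discounted by β:

```latex
% Initial total weight: each of the |A| algorithms starts at weight 1.
W_0 = |\mathbf{A}|
% On each compound mistake, at least half the total weight backed the
% wrong prediction and is multiplied by \beta, so
W_{t+1} \le \left(\tfrac{1}{2} + \tfrac{\beta}{2}\right) W_t
% After M compound mistakes, the best algorithm's weight \beta^m (it made
% at most m mistakes) is still part of the total:
\beta^m \le W_{\mathrm{final}} \le |\mathbf{A}| \left(\tfrac{1+\beta}{2}\right)^{M}
% Taking logarithms and solving for M:
M \le \frac{m \ln(1/\beta) + \ln|\mathbf{A}|}{\ln\left(\tfrac{2}{1+\beta}\right)}
  = O(\log|\mathbf{A}| + m)
```

For example, with β = 1/2 the denominator is ln(4/3), giving roughly M ≤ 2.41(m + log<sub>2</sub>|'''A'''|).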
 
There are many variations of the weighted majority algorithm that handle different situations, such as shifting targets, infinite pools, or randomized predictions. The core mechanism remains similar, with the final performance of the compound algorithm bounded by a function of the performance of the '''specialist''' (the best-performing algorithm) in the pool.
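In the randomized variation, instead of taking a deterministic weighted vote, the compound algorithm follows a single algorithm sampled with probability proportional to its weight; the weight-update rule is unchanged. A minimal sketch (the name <code>rwma_predict</code> is invented for illustration):

```python
import random


def rwma_predict(weights, votes, rng=random):
    """Follow one expert, chosen with probability proportional to its weight."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for w, v in zip(weights, votes):
        acc += w
        if r < acc:
            return v
    return votes[-1]  # guard against floating-point round-off
```

Because the adversary can no longer predict which side the compound algorithm will take, randomization improves the constant in the worst-case mistake bound compared with the deterministic vote.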
 
== See also ==
* [[randomized weighted majority algorithm]]
 
==References==
 
* Littlestone, N. & [[Manfred K. Warmuth|Warmuth, M.]] (1989). ''The Weighted Majority Algorithm''. IEEE Symposium on Foundations of Computer Science.
 
[[Category:Machine learning algorithms]]

Latest revision as of 16:30, 25 December 2013
