# Main: StrongAggregatingAlgorithm

The Strong Aggregating Algorithm is one of the exponential-weights algorithms. It usually enjoys the strongest theoretical performance guarantees, although its computational efficiency may not match that of its competitors. In the case of a finite number $N$ of experts, it makes predictions according to the following scheme (some explanations can be found after the description of the algorithm):

Initialize weights of experts $w_i^0 := 1/N$, $i = 1, \ldots, N$.
Initialize cumulative losses $L_0 := 0$, $L_0^i := 0$, $i = 1, \ldots, N$.
FOR $t = 1, 2, \ldots$
Experts announce predictions $\gamma_t^i \in \Gamma$, $i = 1, \ldots, N$.
Calculate the generalized prediction $g_t(\omega) := -\frac{1}{\eta} \ln \sum_{i=1}^N w_i^{t-1} e^{-\eta \lambda(\omega, \gamma_t^i)}$, $\omega \in \Omega$.
Algorithm announces $\gamma_t := \Sigma(g_t)$, where $\Sigma$ is the chosen substitution function.
Reality announces $\omega_t \in \Omega$.
$L_t := L_{t-1} + \lambda(\omega_t, \gamma_t)$.
$L_t^i := L_{t-1}^i + \lambda(\omega_t, \gamma_t^i)$, $i = 1, \ldots, N$.
Update the weights $w_i^t := w_i^{t-1} e^{-\eta \lambda(\omega_t, \gamma_t^i)}$ and normalize them $w_i^t := w_i^t / \sum_{j=1}^N w_j^t$, $i = 1, \ldots, N$.
END FOR.

Figure 1. Aggregating Algorithm in the case of two possible outcomes, two experts, and the square-loss function $\lambda(\omega, \gamma) = (\omega - \gamma)^2$.

In this description, $\lambda : \Omega \times \Gamma \to [0, \infty]$ is the loss function chosen for the current game, and $\eta > 0$ is the learning rate, a parameter of the algorithm. The protocol above means that the Strong Aggregating Algorithm takes the losses suffered by the experts, transfers them into an exponential space by $\lambda \mapsto e^{-\eta \lambda}$, mixes them using the weights of the experts, and transfers them back from the exponential space; this way we receive the so-called generalized prediction $g_t$. The generalized prediction is not always a real prediction, so one should apply the substitution function $\Sigma$ to it. The substitution function is a function such that for $\gamma = \Sigma(g)$ we have $\lambda(\omega, \gamma) \le c(\eta) g(\omega)$ for the smallest possible $c(\eta) \ge 1$ and for all $\omega \in \Omega$. For various loss functions it can be shown that

$$L_T \le c(\eta) L_T^i + \frac{c(\eta)}{\eta} \ln N$$

for all $T$ and all $i = 1, \ldots, N$, where $L_T$ is the loss suffered by the algorithm over the first $T$ trials and $L_T^i$ is the loss suffered by the $i$th expert over the first $T$ trials. For mixable loss functions $c(\eta) = 1$. In particular, $c(\eta) = 1$ and $\eta = 2$ for the square-loss function and the case when $\omega_t \in [0, 1]$.
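The protocol above can be sketched in Python for the binary-outcome, square-loss setting of Figure 1. This is a minimal illustration, assuming outcomes in $\{0, 1\}$, $\eta = 2$, and the standard square-loss substitution function $\gamma = 1/2 + (g(0) - g(1))/2$; the function name and the clipping of $\gamma$ to $[0, 1]$ are my own choices:

```python
import math

def aggregating_algorithm(expert_preds, outcomes, eta=2.0):
    """Strong Aggregating Algorithm for the square loss with outcomes in {0, 1}.

    expert_preds: list of trials, each a list of N expert predictions in [0, 1].
    outcomes: list of observed outcomes (0 or 1).
    Returns the algorithm's predictions and its cumulative square loss.
    """
    n_experts = len(expert_preds[0])
    weights = [1.0 / n_experts] * n_experts  # w_i^0 = 1/N
    predictions, total_loss = [], 0.0
    for preds, omega in zip(expert_preds, outcomes):
        # Generalized prediction g(w) = -(1/eta) ln sum_i w_i e^{-eta (w - gamma_i)^2}.
        g = [-math.log(sum(w * math.exp(-eta * (o - p) ** 2)
                           for w, p in zip(weights, preds))) / eta
             for o in (0.0, 1.0)]
        # Square-loss substitution function, clipped to the prediction space [0, 1].
        gamma = min(1.0, max(0.0, 0.5 + (g[0] - g[1]) / 2.0))
        predictions.append(gamma)
        total_loss += (omega - gamma) ** 2
        # Update the weights and normalize them.
        weights = [w * math.exp(-eta * (omega - p) ** 2)
                   for w, p in zip(weights, preds)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return predictions, total_loss
```

On any such data the sketch's cumulative loss should stay within $(\ln N)/\eta$ of the best expert's loss, matching the bound above with $c(\eta) = 1$.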

For the online linear regression problem with bounded signals $\|x_t\|_\infty \le X$ and outcomes $|y_t| \le Y$, Vovk (2001) proves that for the square-loss function the Aggregating Algorithm (Aggregating Algorithm Regression, AAR) can achieve

$$L_T(\mathrm{AAR}) \le \inf_\theta \left( L_T(\theta) + a \|\theta\|^2 \right) + n Y^2 \ln\left( \frac{T X^2}{a} + 1 \right),$$

where $L_T(\theta)$ is the loss over the first $T$ trials of the linear function $\theta \cdot x$ of the signal $x$, $a > 0$ is a regularization parameter, and $n$ is the dimension of the signal.
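AAR admits a simple closed form: ridge regression in which the current signal is also included in the regularized Gram matrix before predicting. A sketch, assuming NumPy; the function name is my own choice:

```python
import numpy as np

def aar_predictions(X, y, a=1.0):
    """Aggregating Algorithm Regression for the square loss.

    At trial t it predicts gamma_t = b^T (a I + sum_{s<=t} x_s x_s^T)^{-1} x_t,
    where b = sum_{s<t} y_s x_s, i.e. ridge regression with the current
    signal x_t already counted in the regularized Gram matrix.
    X: (T, n) array of signals; y: (T,) array of outcomes; a > 0: regularization.
    """
    T, n = X.shape
    A = a * np.eye(n)        # a I + sum of outer products x_s x_s^T
    b = np.zeros(n)          # sum of y_s x_s over past trials
    preds = np.empty(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)  # current x_t enters A before the prediction
        preds[t] = b @ np.linalg.solve(A, x)
        b += y[t] * x        # y_t becomes known only after predicting
    return preds
```

Including $x_t$ in the Gram matrix before predicting is what distinguishes AAR from ordinary online ridge regression.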


### Bibliography

• Vladimir Vovk. Aggregating strategies. In: Proceedings of the 3rd Annual Workshop on Computational Learning Theory (ed. by M. Fulk and J. Case), pp. 371–383. San Mateo, CA: Morgan Kaufmann, 1990.
• Vladimir Vovk. A game of prediction with expert advice. Journal of Computer and System Sciences 56:153–173, 1998.
• Vladimir Vovk. Competitive on-line statistics. International Statistical Review 69:213–248, 2001.