On-line learning is an approach to solving various machine learning tasks with algorithms from competitive on-line prediction. One of the most popular tasks is to train a maximum-margin classifier, such as an SVM, on-line; this amounts to minimizing a convex objective. Lectures by Nicolò Cesa-Bianchi on online learning are accessible here. Lectures by Manfred K. Warmuth on the role of Bregman divergences (a very powerful proof technique) in online learning are accessible here.
Several algorithms are based on learning a perceptron and obtain a bound on the number of mistakes relative to the value of the SVM objective. These algorithms use different update rules for the weight vector. The most successful follow the potential approach, in which the update is the gradient of a certain potential function. The Exponentiated Gradient algorithm, for example, uses an exponential potential function; the Winnow algorithm yields other mistake bounds.
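As a sketch of the contrast between additive and potential-based (multiplicative) updates, the snippet below compares the classical perceptron update with an exponentiated-gradient-style update on a toy data stream. The data, the learning rate `eta`, and the function names are illustrative assumptions, not part of any of the cited algorithms' canonical presentations:

```python
import numpy as np

def perceptron_update(w, x, y):
    # Additive update: on a mistake, move w toward y * x.
    if y * np.dot(w, x) <= 0:
        w = w + y * x
    return w

def eg_update(w, x, y, eta=0.5):
    # Multiplicative (exponentiated gradient) update: on a mistake,
    # each weight is scaled by exp(eta * y * x_i) and the vector is
    # renormalized so it stays on the probability simplex.
    if y * np.dot(w, x) <= 0:
        w = w * np.exp(eta * y * x)
        w = w / w.sum()
    return w

# Toy stream (an assumption for illustration): the label depends
# only on the sign of the first coordinate.
rng = np.random.default_rng(0)
d = 20
w_p = np.zeros(d)
w_eg = np.full(d, 1.0 / d)   # EG starts at the uniform distribution
mistakes_p = mistakes_eg = 0
for _ in range(500):
    x = rng.choice([-1.0, 1.0], size=d)
    y = 1.0 if x[0] > 0 else -1.0
    if y * np.dot(w_p, x) <= 0:
        mistakes_p += 1
    w_p = perceptron_update(w_p, x, y)
    if y * np.dot(w_eg, x) <= 0:
        mistakes_eg += 1
    w_eg = eg_update(w_eg, x, y)
print(mistakes_p, mistakes_eg)
```

Both learners make a bounded number of mistakes on this stream; the multiplicative update keeps its weights non-negative and normalized, which is the setting where Winnow-type logarithmic bounds apply.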
- Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter Bartlett. Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks. The Journal of Machine Learning Research, 9:1775–1822, 2008.
- Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.