# On-line Learning

## Main.On-lineLearning History


April 21, 2009, at 11:15 AM
by - lecture available

Changed lines 6-8 from:

* Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter Bartlett.

~~Exponentiated~~ Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks.

~~To appear in JMLR~~.

to:

* Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter Bartlett. Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks. The Journal of Machine Learning Research, 9, pp. 1775-1822, 2008.

April 21, 2009, at 11:13 AM
by - lectures available

Changed line 3 from:

Several algorithms are based on learning a perceptron $w \cdot x$ and obtain a bound on the number of mistakes in terms of the SVM objective function. These algorithms use different updates for $w$. The most successful ones use the potential approach, in which the update is represented as the gradient of a certain potential function. The [[Exponentiated Gradient]], for example, uses an exponential potential function.

to:

Several algorithms are based on learning a perceptron $w \cdot x$ and obtain a bound on the number of mistakes in terms of the SVM objective function. These algorithms use different updates for $w$. The most successful ones use the potential approach, in which the update is represented as the gradient of a certain potential function. The [[Exponentiated Gradient]], for example, uses an exponential potential function. The [[Winnow algorithm]] allows one to obtain other bounds.
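
A minimal sketch of the classic Winnow update, as one multiplicative-weights scheme of this kind (the function name, the binary-feature setting, and the promotion factor $\alpha = 2$ are illustrative assumptions, not from this page):

[@
# Minimal sketch of Littlestone's Winnow algorithm.
# Assumptions: binary features x in {0,1}^n, labels y in {0,1},
# threshold theta = n and promotion factor alpha = 2.
def winnow(examples, n, alpha=2.0):
    w = [1.0] * n
    theta = float(n)
    mistakes = 0
    for x, y in examples:
        y_hat = 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0
        if y_hat != y:
            mistakes += 1
            for i in range(n):
                if x[i] == 1:
                    # promote on a false negative, demote on a false positive
                    w[i] *= alpha if y == 1 else 1.0 / alpha
    return w, mistakes
@]

For a target disjunction of $k$ literals this multiplicative scheme gives a mistake bound of order $k \log n$, which is the sense in which Winnow's bounds differ from the perceptron's.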

April 21, 2009, at 11:12 AM
by - lecture available

Changed lines 1-4 from:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to train a maximum-margin classifier, such as SVM, on-line, based on the maximization of a convex function. ~~The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to find such a function in dual or primal form.~~

~~Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of [[http://en.wikipedia.org/wiki/Bregman_divergence | Bregman Divergences]] (a very powerful proof technique) in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].~~

Several algorithms are based on learning a perceptron $w \cdot x$ and obtain a bound on the number of mistakes in terms of the SVM objective function. These algorithms use different updates for $w$. The most successful ones use the potential approach, in which the update is represented as the gradient of a certain potential function.

to:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to train a maximum-margin classifier, such as SVM, on-line, based on the maximization of a convex function. Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of [[http://en.wikipedia.org/wiki/Bregman_divergence | Bregman Divergences]] (a very powerful proof technique) in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

Several algorithms are based on learning a perceptron $w \cdot x$ and obtain a bound on the number of mistakes in terms of the SVM objective function. These algorithms use different updates for $w$. The most successful ones use the potential approach, in which the update is represented as the gradient of a certain potential function. The [[Exponentiated Gradient]], for example, uses an exponential potential function.
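
A minimal sketch of the multiplicative [[Exponentiated Gradient]] update in the style of Kivinen and Warmuth (the squared loss, the simplex constraint, and the learning rate eta are illustrative assumptions):

[@
import math

# Minimal sketch of the Exponentiated Gradient (EG) update for
# on-line regression with squared loss.  Assumptions: weights are
# kept on the probability simplex; eta is an illustrative choice.
def eg_update(w, x, y, eta=0.1):
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    # d/dw_i of the loss (y_hat - y)^2 / 2 is (y_hat - y) * x_i
    w_new = [wi * math.exp(-eta * (y_hat - y) * xi) for wi, xi in zip(w, x)]
    z = sum(w_new)
    return [wi / z for wi in w_new]  # renormalize onto the simplex
@]

Exponentiating the gradient is exactly the update induced by a relative-entropy (exponential) potential, in line with the potential view described above.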

April 21, 2009, at 10:57 AM
by - lectures accessible

Added lines 3-4:

Several algorithms are based on learning a perceptron $w \cdot x$ and obtain a bound on the number of mistakes in terms of the SVM objective function. These algorithms use different updates for $w$. The most successful ones use the potential approach, in which the update is represented as the gradient of a certain potential function.
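
To make the comparison with the SVM objective concrete, here is a minimal sketch of one on-line (sub)gradient step on the regularized hinge loss, in the spirit of Zinkevich's online convex programming cited in the bibliography below (the function name and the constants eta and lam are illustrative assumptions, not any specific published algorithm):

[@
# Minimal sketch: one on-line subgradient step on the regularized
# SVM objective  lam/2 * ||w||^2 + max(0, 1 - y * <w, x>).
# Assumptions: labels y in {-1, +1}; eta and lam are illustrative.
def hinge_step(w, x, y, eta=0.1, lam=0.01):
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    if margin < 1:  # the hinge is active; its subgradient is -y * x
        return [wi - eta * (lam * wi - y * xi) for wi, xi in zip(w, x)]
    return [wi - eta * lam * wi for wi in w]  # only the regularizer acts
@]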

April 21, 2009, at 10:42 AM
by - lectures accessible

Changed line 2 from:

Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of Bregman ~~Divergences~~ (a very powerful proof technique) in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

to:

Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of [[http://en.wikipedia.org/wiki/Bregman_divergence | Bregman Divergences]] (a very powerful proof technique) in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

April 21, 2009, at 10:38 AM
by - lectures accessible

Changed line 2 from:

Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of Bregman Divergences in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

to:

Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of Bregman Divergences (a very powerful proof technique) in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

April 21, 2009, at 10:35 AM
by - introductory lecture

Added line 2:

Lectures of Nicolo Cesa-Bianchi about online learning are accessible [[http://videolectures.net/mlss07_bianchi_onlle/ | here]]. Lectures of Manfred K. Warmuth about the role of Bregman Divergences in online learning are accessible [[http://videolectures.net/mlss06tw_warmuth_olbd/ | here]].

July 10, 2008, at 09:11 PM
by - learn --> find

Changed line 1 from:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to train a maximum-margin classifier, such as SVM, on-line, based on the maximization of a convex function. The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to ~~learn~~ such a function in dual or primal form.

to:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to train a maximum-margin classifier, such as SVM, on-line, based on the maximization of a convex function. The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to find such a function in dual or primal form.

Changed line 1 from:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to ~~learn~~ a maximum-margin classifier, such as SVM, based on the maximization of a convex function. The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to learn such a function in dual or primal form.

to:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to train a maximum-margin classifier, such as SVM, on-line, based on the maximization of a convex function. The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to learn such a function in dual or primal form.

Added lines 1-7:

On-line learning is an approach to solving various machine learning tasks using algorithms of [[competitive on-line prediction]]. One of the most popular tasks is to learn a maximum-margin classifier, such as SVM, based on the maximization of a convex function. The [[Exponentiated Gradient]] and [[Gradient Descent]] have been applied to learn such a function in dual or primal form.

!!!Bibliography

* Michael Collins, Amir Globerson, Terry Koo, Xavier Carreras, and Peter Bartlett.

Exponentiated Gradient Algorithms for Conditional Random Fields and Max-Margin Markov Networks.

To appear in JMLR.

* Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ''Twentieth International Conference on Machine Learning'', 2003.
