Mathematical Programming

Techniques of competitive on-line prediction make it possible to find approximate (but "almost optimal") solutions to a sequence of mathematical programming problems whose objective functions and constraints change over time; a sketch of one such technique is given below. An open problem remains in competitive on-line linear programming.
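
As an illustration, here is a minimal sketch of online (projected) gradient descent in the spirit of Zinkevich (2003), applied to a sequence of convex objectives over a fixed feasible set. The function names, the ball-shaped feasible set, and the quadratic objectives are illustrative assumptions, not part of the original text.

```python
import numpy as np

def project_onto_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius (the feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_gradient_descent(gradient, dim, radius=1.0, T=100):
    """Online gradient descent: play x_t, observe the t-th convex objective,
    take a gradient step, and project back onto the feasible set."""
    x = np.zeros(dim)
    plays = []
    for t in range(1, T + 1):
        plays.append(x.copy())
        g = gradient(t, x)               # gradient of the t-th objective at the played point
        eta = radius / np.sqrt(t)        # step size ~ 1/sqrt(t) yields O(sqrt(T)) regret
        x = project_onto_ball(x - eta * g, radius)
    return plays

# Illustrative sequence of objectives f_t(x) = ||x - c_t||^2 with drifting targets c_t.
rng = np.random.default_rng(0)
targets = [rng.normal(scale=0.5, size=3) for _ in range(100)]
plays = online_gradient_descent(lambda t, x: 2.0 * (x - targets[t - 1]), dim=3)
```

The decreasing step size is the standard choice for this setting: it balances adapting to the changing objectives against the cumulative cost of moving, giving regret growing only as the square root of the number of rounds against the best fixed feasible point.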

Bibliography

  • Peter Bartlett, Elad Hazan, and Alexander Rakhlin (2008). Adaptive online gradient descent. In Advances in Neural Information Processing Systems 20 (NIPS 2007), D. Koller, Y. Singer, and J. Platt, Eds. Cambridge, MA: MIT Press.
  • Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal (2006). Logarithmic regret algorithms for online convex optimization. In COLT 2006, pages 499-513.
  • Tatsiana Levina, Yuri Levin, Jeff McGill, and Mikhail Nediak (2007). Linear programming with online learning. Operations Research Letters, 35, 612-618.
  • Martin Zinkevich (2003). Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning (ICML 2003).