# Separation Curve

The usual goal in prediction with expert advice is to prove inequalities of the type

$$L_T \le c L_T(i) + a \ln n, \qquad (1)$$

where $c$ and $a$ are positive constants, $L_T$ is Master's loss over the first $T$ trials, $n$ is the number of experts, and $L_T(i)$ is the loss of expert $i$, $i=1,\ldots,n$. It is usually possible to improve one of the constants by making the other worse. The *winning set* for Master is the set of all pairs $(c,a)$ for which Master can enforce (1) no matter what the experts' predictions are. The *separation curve* is the boundary of the winning set. Vovk (1998) finds the separation curve for $n\to\infty$ and shows that the Aggregating Algorithm achieves (1) whenever $(c,a)$

is on the separation curve. The open problem is to find the separation curve for a fixed (finite) number of experts $n$.

It is known that:

- The separation curve for a fixed number of experts is different from the separation curve for a large number of experts. This was shown by Cesa-Bianchi et al. (1996); see also Vovk 1998, Section 8.
- The Aggregating Algorithm (and even the Weighted Majority Algorithm) is not optimal for a fixed number of experts. This was shown by Littlestone and Long (1993–1994); see also Vovk 1998, Section 8.
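As a concrete illustration of a bound of form (1), the sketch below implements the exponential-weights (Hedge) master, a close relative of the Aggregating Algorithm, for losses in $[0,1]$. With learning rate $\eta$, its mixture loss satisfies (1) with $c = \eta/(1-e^{-\eta})$ and $a = 1/(1-e^{-\eta})$ (the classical Freund–Schapire analysis). This is a minimal sketch, not the algorithm analyzed by Vovk (1998), and the loss sequence is made up for the demonstration.

```python
import math
import random

def hedge(loss_matrix, eta=0.5):
    """Run the exponential-weights (Hedge) master on a T x n loss matrix
    with entries in [0, 1]; return the master's cumulative mixture loss."""
    n = len(loss_matrix[0])
    log_w = [0.0] * n                    # log-weights; all experts start equal
    master_loss = 0.0
    for losses in loss_matrix:
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]        # shift for stability
        total = sum(w)
        p = [wi / total for wi in w]                  # master's mixture over experts
        master_loss += sum(pi * li for pi, li in zip(p, losses))
        # Multiplicative update: w_i <- w_i * exp(-eta * loss_i)
        log_w = [lw - eta * li for lw, li in zip(log_w, losses)]
    return master_loss

# Demo: random losses; check the guarantee L_T <= c * min_i L_T(i) + a * ln(n).
random.seed(0)
T, n, eta = 200, 5, 0.5
loss_matrix = [[random.random() for _ in range(n)] for _ in range(T)]
L_master = hedge(loss_matrix, eta)
best = min(sum(col) for col in zip(*loss_matrix))     # best expert's total loss
c = eta / (1 - math.exp(-eta))
a = 1 / (1 - math.exp(-eta))
assert L_master <= c * best + a * math.log(n)
```

Note the trade-off the open problem is about: increasing $\eta$ drives $a = 1/(1-e^{-\eta})$ down toward 1 while pushing $c = \eta/(1-e^{-\eta})$ up, tracing out a curve of achievable pairs $(c,a)$.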

### Bibliography

- Nicolo Cesa-Bianchi, Yoav Freund, David P. Helmbold, and Manfred K. Warmuth. On-line prediction and conversion strategies. *Machine Learning*, 25:71–110, 1996.
- Vladimir Vovk. A game of prediction with expert advice. *Journal of Computer and System Sciences*, 56:153–173, 1998.