Metric Entropy

The metric entropy technique in competitive on-line prediction provides theoretical loss bounds for algorithms that compete with a wide benchmark class. Many such benchmark classes of functions can be covered by an $\epsilon$-net with a finite or countable number of cells. One then applies an algorithm (called a strategy below) that competes with such a net. The resulting algorithm competes with all these strategies for different values of $\epsilon$; it can compete with the whole class because the nets cover all the functions at all scales. The regret term depends on the metric entropy of the chosen class (see Kolmogorov and Tikhomirov, 1959). For the class of functions from the Sobolev space $B_p^s$, compactly embedded in $C(X)$, $X=[0,1]^m$ (in the sense that balls of $B_p^s$ are compact in $C(X)$), one can achieve a regret term of order $N^{m/(m+s)}$, where $s \in (m,\infty)$.
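The order of this regret term can be motivated by a standard balancing argument (a sketch under two assumptions not spelled out above: the best strategy in an $\epsilon$-net is within $O(\epsilon N)$ of the best function in the class, and the metric entropy of a Sobolev ball grows as $H_\epsilon = O(\epsilon^{-m/s})$, the Kolmogorov–Tikhomirov rate):

$$ R_N \;\lesssim\; \epsilon N + H_\epsilon \;=\; \epsilon N + \epsilon^{-m/s}, \qquad \epsilon = N^{-s/(m+s)} \;\Rightarrow\; R_N = O\!\left(N^{m/(m+s)}\right). $$

Both terms are balanced at $\epsilon = N^{-s/(m+s)}$, since $\epsilon N = N^{1-s/(m+s)} = N^{m/(m+s)}$ and $\epsilon^{-m/s} = N^{m/(m+s)}$ as well.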
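Competing with a single $\epsilon$-net can be illustrated with a toy exponential-weights forecaster. This is a minimal sketch, not the full multi-net construction: the net here covers only the constant functions on $[0,1]$, the loss is assumed to be square loss, and the learning rate $\eta = 1/2$ is the exp-concavity constant of square loss on $[0,1]$.

```python
import math

def exp_weights(outcomes, eps=0.1, eta=0.5):
    """Exponentially weighted forecaster over a finite eps-net of
    constant predictors in [0, 1], evaluated with square loss."""
    # eps-net: a grid of constant strategies covering [0, 1]
    experts = [i * eps for i in range(int(round(1 / eps)) + 1)]
    log_w = [0.0] * len(experts)  # log-weights, uniform prior
    total_loss = 0.0
    for y in outcomes:
        # predict with the weighted average of the experts
        # (valid for exp-concave losses such as square loss)
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        pred = sum(wi * e for wi, e in zip(w, experts)) / sum(w)
        total_loss += (pred - y) ** 2
        # exponential update: penalise each expert by its own square loss
        log_w = [lw - eta * (e - y) ** 2 for lw, e in zip(log_w, experts)]
    return total_loss, experts
```

The standard guarantee for this forecaster is that its cumulative loss exceeds that of the best strategy in the net by at most $(\ln K)/\eta$, where $K$ is the net's size; the full technique then aggregates such forecasters over a sequence of values of $\epsilon$.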