Shortcut Defensive Forecasting

The Defensive Forecasting technique cannot provide a regret term better than of order $\sqrt{N}$. To overcome this, Vovk (2007) proposes a modified technique for the case when the benchmark class is a finite set of experts. To obtain a regret term equal to the one achieved by the Strong Aggregating Algorithm, the author uses the following scheme. First he proves:

Lemma. Let $\eta > 0$, let $(\Omega=\{0,1\},\Gamma=[0,1],\lambda)$ be the standard $\eta$-mixable game of prediction, and let $k \in [0,\eta]$. Then the process

$S_N := \exp\Bigl( k \sum_{n=1}^N \bigl( \lambda(\omega_n,\gamma_n) - \lambda(\omega_n,\gamma_n^k) \bigr) \Bigr)$

is a supermartingale, in the sense that the expectation of $S_N$ over $\omega_N$ drawn from the forecast $\gamma_N$ is at most $S_{N-1}$; here $\gamma_n$ are Forecaster's moves and $\gamma_n^k$ is a competing prediction sequence (in the application below, the predictions of expert $k$).
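Since the log-loss game is known to be $\eta$-mixable with $\eta = 1$, the one-step supermartingale property of the Lemma can be checked numerically for it. Below is a minimal sketch of such a check; the function name log_loss and the sampled ranges are illustrative choices, not anything from the paper.

```python
# Numerical check of the Lemma for the binary log-loss game,
# which is eta-mixable with eta = 1.  Illustrative sketch only.
import math
import random

def log_loss(omega: int, gamma: float) -> float:
    """lambda(omega, gamma) for the binary log-loss game."""
    return -math.log(gamma if omega == 1 else 1.0 - gamma)

random.seed(0)
eta = 1.0  # log loss is 1-mixable
for _ in range(10_000):
    k = random.uniform(0.0, eta)          # coefficient k in [0, eta]
    gamma = random.uniform(0.01, 0.99)    # Forecaster's move
    gamma_k = random.uniform(0.01, 0.99)  # the competing prediction
    # One-step expected growth factor of S_N under omega ~ gamma:
    growth = sum(
        p * math.exp(k * (log_loss(w, gamma) - log_loss(w, gamma_k)))
        for w, p in ((1, gamma), (0, 1.0 - gamma))
    )
    assert growth <= 1.0 + 1e-9, (k, gamma, gamma_k, growth)
print("one-step supermartingale property holds in all sampled cases")
```

For log loss the expected growth factor is exactly $1$ at $k = \eta = 1$ and at $k = 0$, and it is convex in $k$, so it is at most $1$ on the whole interval $[0, \eta]$.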

Then he uses the Levin lemma:

Lemma (Levin, Takemura). For any forecast-continuous supermartingale $S: (\Gamma \times \Gamma \times \Omega)^* \to \mathbb{R}$ there exists a strategy for Forecaster ensuring that $S_0 \ge S_1 \ge \dots$ regardless of the other players' moves.
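In the binary case one standard way to realize such a strategy is via the intermediate value theorem: consider $\phi(\gamma) := S(\dots,\gamma,1) - S(\dots,\gamma,0)$; at a root of $\phi$ both outcomes yield the same value, which the supermartingale inequality forces to be at most the current value of $S$, and if $\phi$ keeps one sign, an endpoint of $[0,1]$ works. The sketch below implements this construction; the names choose_forecast and s_next are hypothetical, not from the paper.

```python
# A sketch of the Forecaster strategy behind the Levin-Takemura lemma
# for the binary case Omega = {0, 1}.  The supermartingale is assumed
# to be given through its one-step update s_next(gamma, omega): the
# value S would take if Forecaster played gamma and Reality answered
# omega.

def choose_forecast(s_next, tol: float = 1e-12) -> float:
    """Pick gamma in [0, 1] so that S cannot increase, whatever omega is.

    Relies on forecast-continuity and on the supermartingale property
        gamma * s_next(gamma, 1) + (1 - gamma) * s_next(gamma, 0) <= S_prev
    holding for every gamma in [0, 1].
    """
    phi = lambda gamma: s_next(gamma, 1) - s_next(gamma, 0)
    if phi(0.0) <= 0:
        # s_next(0, 1) <= s_next(0, 0) <= S_prev: gamma = 0 is safe.
        return 0.0
    if phi(1.0) >= 0:
        # s_next(1, 0) <= s_next(1, 1) <= S_prev: gamma = 1 is safe.
        return 1.0
    # phi changes sign on [0, 1]; bisect for a root.  At a root both
    # outcomes yield the same value, at most S_prev by the
    # supermartingale inequality.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if phi(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Running such a chooser against the supermartingale of the first lemma is the "shortcut" strategy; the theorem below packages the resulting guarantee.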

Finally, combining the two lemmas, the author proves a theorem with the desired regret term.

Theorem. There exists a strategy for Learner, competing with $K$ experts, that guarantees $L_N \le L_N^k + \frac{\ln K}{\eta}$ for all $N = 1, 2, \dots$ and all $k = 1, \dots, K$, where $L_N$ and $L_N^k$ are the cumulative losses of Learner and of expert $k$ respectively. The bound follows from the scheme above: applying the Levin lemma to the average $\frac{1}{K}\sum_{k=1}^{K} \exp(\eta (L_N - L_N^k))$ of the supermartingales from the first lemma keeps it at most its initial value $1$, so that $\exp(\eta(L_N - L_N^k)) \le K$ for every $k$.
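The whole scheme can be put together in a few dozen lines. The sketch below runs it for the log-loss game ($\eta = 1$) against hypothetical fixed-probability experts and random outcomes, then checks the theorem's bound; every name and parameter here is an illustrative assumption, not code or data from Vovk (2007).

```python
# End-to-end sketch of Shortcut Defensive Forecasting for the
# log-loss game (eta = 1) with K hypothetical experts, checking the
# theorem's bound L_N <= L_N^k + ln(K)/eta on random data.
import math
import random

def log_loss(omega: int, gamma: float) -> float:
    return -math.log(gamma if omega == 1 else 1.0 - gamma)

K, N, eta = 5, 2000, 1.0
random.seed(1)
experts = [random.uniform(0.05, 0.95) for _ in range(K)]  # fixed predictions

L, L_exp = 0.0, [0.0] * K  # cumulative losses of Learner and experts
for _ in range(N):
    # One-step value of S = (1/K) * sum_k exp(eta * (L - L^k)) if
    # Learner plays gamma and Reality answers omega.
    def s_next(gamma, omega):
        return sum(
            math.exp(eta * (L + log_loss(omega, gamma)
                            - L_exp[k] - log_loss(omega, experts[k])))
            for k in range(K)
        ) / K
    phi = lambda g: s_next(g, 1) - s_next(g, 0)
    lo, hi = 1e-9, 1.0 - 1e-9  # clipped so log_loss stays finite
    if phi(lo) <= 0:
        gamma = lo
    elif phi(hi) >= 0:
        gamma = hi
    else:
        for _ in range(60):  # bisection, as in the sketch above
            mid = (lo + hi) / 2.0
            lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
        gamma = (lo + hi) / 2.0
    omega = random.randint(0, 1)  # Reality's move
    L += log_loss(omega, gamma)
    for k in range(K):
        L_exp[k] += log_loss(omega, experts[k])

print(f"L_N = {L:.2f}, best expert loss = {min(L_exp):.2f}, "
      f"regret bound ln(K)/eta = {math.log(K)/eta:.2f}")
assert L <= min(L_exp) + math.log(K) / eta + 1e-3
```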

Thus the bound provided by the Shortcut Defensive Forecasting technique coincides with the one provided by the Strong Aggregating Algorithm.

Bibliography

  • Vladimir Vovk. Defensive forecasting for optimal prediction with expert advice. arXiv:0708.1503v1 [cs.LG], arXiv.org e-Print archive, August 2007.