Inductive Conformal Predictor

Inductive conformal predictors (ICPs) are a modification of conformal predictors designed to improve computational efficiency on large data sets. They are also known as "split conformal predictors".


An inductive conformal predictor (ICP) determined by a conformity measure $A$ and an ascending sequence of positive integers $0 < m_1 < m_2 < \ldots$ is a confidence predictor $\Gamma: Z^*\times X\times (0,1)\rightarrow 2^Y$ ($2^Y$ is the set of all subsets of $Y$) such that the prediction sets $\Gamma^\epsilon(x_1, y_1, \ldots, x_{n}, y_{n}, x_{n+1})$ are computed as follows:

  • if $n \le m_1$, set

$\Gamma^\epsilon(x_1, y_1, \ldots, x_{n}, y_{n}, x_{n+1}) := \{y \in Y: |\{i = 1, \ldots, n+1: \alpha_i \le \alpha_{n+1}\}| / (n+1) > \epsilon\},$


where the conformity scores $\alpha_i$ are defined by

$\alpha_i := A(\{(x_1, y_1), \ldots, (x_{i-1}, y_{i-1}), (x_{i+1}, y_{i+1}), \ldots, (x_{n}, y_{n}), (x_{n+1}, y)\}, (x_i, y_i)), \quad i = 1, \ldots, n,$

$\alpha_{n+1} := A(\{(x_1, y_1), \ldots, (x_{n}, y_{n})\}, (x_{n+1}, y)).$

  • otherwise, find the $k$ such that $m_k < n \le m_{k+1}$ and set

$\Gamma^{\epsilon}(x_1, y_1, \ldots, x_{n}, y_{n}, x_{n+1}) := \{y \in Y: |\{i = m_k + 1, \ldots, n+1: \alpha_i \le \alpha_{n+1}\}| / (n + 1 - m_k) > \epsilon\},$

where the conformity scores $\alpha_i$ are defined by

$\alpha_i := A(\{(x_1, y_1), \ldots, (x_{m_k}, y_{m_k})\}, (x_i, y_i)), \quad i = m_k + 1, \ldots, n,$

$\alpha_{n+1} := A(\{(x_1, y_1), \ldots, (x_{m_k}, y_{m_k})\}, (x_{n+1}, y)),$

and $\{\ldots\}$ designates the bag (multiset) of observations.
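To make the inductive case ($m_k < n \le m_{k+1}$) concrete, here is a minimal Python sketch: the conformity measure is evaluated on the fixed bag $(x_1, y_1), \ldots, (x_{m_k}, y_{m_k})$, and a candidate label $y$ is kept when its p-value exceeds $\epsilon$. The least-squares conformity measure in the usage example is an illustrative choice, not part of the definition.

```python
import numpy as np

def icp_prediction_set(train, calib, x_new, candidates, epsilon, conformity):
    """Inductive conformal prediction set for a new object x_new.

    train      : observations (x_1, y_1), ..., (x_{m_k}, y_{m_k}) -- the bag
                 on which the conformity measure is evaluated
    calib      : calibration observations (x_{m_k+1}, y_{m_k+1}), ..., (x_n, y_n)
    candidates : candidate labels y to test for inclusion
    epsilon    : significance level
    conformity : A(train, (x, y)) -> conformity score (larger = more conforming)
    """
    alphas = np.array([conformity(train, xy) for xy in calib])
    prediction = []
    for y in candidates:
        a_new = conformity(train, (x_new, y))
        # p-value: fraction of the n + 1 - m_k scores that are <= alpha_{n+1}
        p = (np.sum(alphas <= a_new) + 1) / (len(alphas) + 1)
        if p > epsilon:
            prediction.append(y)
    return prediction

# Illustrative conformity measure: minus the absolute residual of a
# least-squares line through the origin fitted to the training bag.
def ls_conformity(train, obs):
    xs = np.array([x for x, _ in train])
    ys = np.array([y for _, y in train])
    w = (xs @ ys) / (xs @ xs)  # slope of y = w * x
    x, y = obs
    return -abs(y - w * x)

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]             # exactly y = 2x
calib = [(1.0, 2.1), (2.0, 3.9), (4.0, 8.05), (5.0, 10.2)]
print(icp_prediction_set(train, calib, 3.0, [5.0, 6.0, 7.0], 0.25, ls_conformity))
# -> [6.0]
```

Note that the conformity scores of the calibration observations do not depend on the candidate label $y$, so they are computed once; only $\alpha_{n+1}$ is recomputed per candidate.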

The standard assumption for ICPs, as for conformal predictors, is the randomness assumption (also called the i.i.d. assumption).

Inductive conformal predictors are generalized by Mondrian conformal predictors to a wider class of confidence predictors. Conformal predictors can be considered an important special case of ICPs.



All the statements in the section are given under the randomness assumption.

Smoothed ICPs are defined analogously to smoothed conformal predictors:

$ \Gamma^\epsilon(x_1, y_1, \ldots, x_{n}, y_{n}, x_{n+1})$

is set to the set of all labels $y \in  Y$ such that

$(|\{i = m_k + 1, \ldots, n+1: \alpha_i < \alpha_{n+1}\}| + \eta_y\,|\{i = m_k + 1, \ldots, n+1: \alpha_i = \alpha_{n+1}\}|) / (n + 1 - m_k) > \epsilon,$

where the conformity scores $\alpha_i$ are defined as before and $\eta_y \in [0,1]$ is a random number.
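The smoothed p-value can be computed directly from the conformity scores. The sketch below is a straightforward transcription of the definition, with `eta` playing the role of $\eta_y$; the function name is illustrative.

```python
import numpy as np

def smoothed_p_value(calib_alphas, a_new, eta):
    """Smoothed conformal p-value.

    calib_alphas : conformity scores alpha_i, i = m_k + 1, ..., n
    a_new        : conformity score alpha_{n+1} of the candidate label
    eta          : tie-breaking random number eta_y in [0, 1]
    """
    scores = np.append(calib_alphas, a_new)   # the n + 1 - m_k scores
    strictly_smaller = np.sum(scores < a_new)
    ties = np.sum(scores == a_new)            # counts a_new itself, so >= 1
    return (strictly_smaller + eta * ties) / len(scores)

# With eta = 1 this reduces to the non-smoothed p-value.
print(smoothed_p_value([-1.0, -2.0, -3.0], 0.0, 1.0))   # -> 1.0
print(smoothed_p_value([-1.0, -2.0, -3.0], 0.0, 0.0))   # -> 0.75
```

Smoothing only matters when there are ties among the conformity scores: $\eta_y$ randomly splits the tied mass, which is what makes the error probability exactly (rather than at most) $\epsilon$.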

As in the case of conformal predictors, we can formulate analogous statements for ICPs using the same terminology.

Theorem All smoothed ICPs are exactly valid.

Corollary All smoothed ICPs are asymptotically exact.

Corollary All ICPs are conservatively valid.

Corollary All ICPs are asymptotically conservative.

To put it simply, in the long run the frequency of erroneous predictions made by an ICP does not exceed $\epsilon$ at each confidence level $1 - \epsilon$.
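This validity claim can be checked empirically: under the randomness assumption the conformal p-value of the true label is (sub-)uniform, so errors at significance level $\epsilon$ occur with frequency at most $\epsilon$. A small Monte Carlo sketch with synthetic i.i.d. scores and illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.1
trials = 2000
errors = 0
for _ in range(trials):
    # i.i.d. conformity scores: calibration examples plus the test example
    scores = rng.normal(size=100)
    a_new = scores[-1]                  # the test example's own score
    p = np.mean(scores <= a_new)        # non-smoothed conformal p-value
    if p <= epsilon:                    # true label excluded from the prediction set
        errors += 1
print(errors / trials)                  # close to epsilon in the long run
```

With continuous scores the p-value is uniform on $\{1/101, \ldots, 1\}$ (here $1/100, \ldots, 1$), so the empirical error rate concentrates near $\epsilon = 0.1$.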


Since inductive conformal predictors are automatically valid, the main goal is to improve their efficiency: to make the prediction sets that ICPs output as small as possible. See the criteria of efficiency.

In comparison with the corresponding conformal predictors determined by the same conformity measure, ICPs are inferior in predictive efficiency (although the loss appears to be small), but they outperform CPs in computational efficiency: the conformity scores in the inductive case are all computed from the fixed training bag, so the underlying model need not be refit for every test object and candidate label.