The randomness assumption (also known as the IID assumption) is that the observations in a sequence are generated independently from the same probability distribution $Q$ on the space of possible observations $\mathbf{Z}$ (often $\mathbf{Z}=\mathbf{X}\times\mathbf{Y}$). A weaker assumption is that of exchangeability; by de Finetti's theorem, for a wide class of observation spaces $\mathbf{Z}$, every exchangeable probability distribution on $\mathbf{Z}^{\infty}$ is a mixture of IID distributions, so the two assumptions are close.
The randomness assumption is used in stochastic prediction and conformal prediction, and it is a standard assumption in machine learning. In applications, algorithms developed under this assumption (such as SVM) are often applied even when it is violated. However, if the observations $z_1,z_2,\ldots$, $z_i=(x_i,y_i)$, come from a stationary measure on $\mathbf{Z}^{\infty}$, the IID assumption can often be made "almost satisfied" by extending the objects $x_i$. For example, in the case of time series we may add the pre-history of $y_i$ to $x_i$ (and this works very well if the time series is Markov).
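The extension of objects by pre-history can be sketched as follows. This is a hypothetical illustration (the function name and the order parameter $p$ are my own): for a univariate time series, each extended object $x_i$ is taken to be the $p$ previous labels, so that for a Markov chain of order $p$ the pairs $(x_i,y_i)$ behave approximately like IID observations.

```python
import numpy as np

def extend_with_prehistory(y, p):
    """Turn a time series y_1, y_2, ... into pairs (x_i, y_i),
    where the extended object x_i holds the p previous labels.
    (Hypothetical helper, not from the original text.)"""
    y = np.asarray(y, dtype=float)
    # Row k of X is (y_k, ..., y_{k+p-1}); its label is y_{k+p}.
    X = np.column_stack([y[i : len(y) - p + i] for i in range(p)])
    labels = y[p:]
    return X, labels

# Usage: an AR(1)-style series, where y_i depends only on y_{i-1},
# so p = 1 already captures the relevant pre-history.
rng = np.random.default_rng(0)
y = [0.0]
for _ in range(99):
    y.append(0.8 * y[-1] + rng.normal())
X, labels = extend_with_prehistory(y, p=1)
print(X.shape, labels.shape)  # (99, 1) (99,)
```

Under this construction, a standard algorithm developed for IID data can be applied to the pairs $(x_i, y_i)$; how well the IID assumption is "almost satisfied" depends on how close the series is to being Markov of order $p$.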