Kernel methods are a powerful tool of modern learning. This article provides an introduction to kernel methods through a motivating example of kernel ridge regression, defines reproducing kernel Hilbert spaces (RKHS), and then sketches a proof of the fundamental existence theorem.
Some results that appear to be important in the context of learning are also discussed.
On this page...
- 1. A Motivating Example: Kernel Ridge Regression
- 2. Reproducing Kernel Hilbert Spaces
- 3. Proof of the Existence Theorem
- 4. What RKHSs are and What They are not
- 5. RKHSs and Prediction in Feature Spaces
- 6. Hierarchies of Kernels
- 7. Tensor Products of Kernels
- 8. Some Popular Kernels
In this section we will introduce kernels in the context of ridge regression. Readers interested only in the formal theory of RKHSs may skip this section and proceed straight to the next.
A more detailed discussion of Ridge Regression and kernels can be found in Section 3 of Steve Busuttil's dissertation.
Suppose we are given a set of examples $(x_1, y_1), (x_2, y_2), \ldots, (x_T, y_T)$, where the $x_t \in \mathbb{R}^n$ are signals and the $y_t \in \mathbb{R}$ are outcomes or labels. We want to find a dependency between signals and outcomes and to be able to predict the outcome $y$ given a new signal $x$. This problem is often referred to as the regression problem 1.
Let us start by restricting ourselves to linear dependencies of the form $y = w'x$, where $w \in \mathbb{R}^n$, $w'x$ is the standard scalar product in $\mathbb{R}^n$, and the prime stands for transposition (by default all vectors are assumed to be column vectors). The class of linear functions is not too rich, and we will need to progress to more sophisticated classes later.
Least squares is a natural, popular, and time-honoured (apparently going back to Legendre and Gauss) approach to finding $w$. Let us take a $w$ minimising the sum of squared discrepancies $\sum_{t=1}^T (y_t - w'x_t)^2$.
The projection on the subspace is always unique and well-defined. However, if the vectors spanning the subspace are linearly dependent (which always happens if the sample is small and $T < n$), this unique projection has multiple representations as their linear combination. The method of least squares thus does not necessarily define a unique regressor.
Consider a more general problem. Let us take $a > 0$ and consider the expression $\sum_{t=1}^T (y_t - w'x_t)^2 + a\|w\|^2$.
Why would anyone want to use $a > 0$? There are three main reasons. First, ridge regression with $a > 0$ always specifies a unique regressor, as will be shown below. Secondly, a positive value of $a$ makes the problem easier computationally; this too will be shown later in this section. Thirdly, the term $a\|w\|^2$ performs a regularisation function: it penalises the growth of the coefficients of $w$ and urges us to look for "simpler" solutions.
Let us find the solution of the ridge regression problem with $a > 0$. It is convenient to introduce the $T \times n$ matrix $X$; the rows of $X$ are the vectors $x_t'$ (and the columns are the vectors mentioned above). We get $w = (X'X + aI)^{-1}X'Y$, where $Y = (y_1, y_2, \ldots, y_T)'$ is the column vector of outcomes.
Let us analyse this expression. The matrices $X'X$, $aI$, and $X'X + aI$ have the size $n \times n$. The matrix $X'X$ is positive semi-definite, i.e., $v'(X'X)v \ge 0$ for all $v \in \mathbb{R}^n$ (this follows from $v'(X'X)v = \|Xv\|^2$). By adding $aI$ we make the matrix positive definite, i.e., we have $v'(X'X + aI)v > 0$ for all $v \ne 0$ (indeed, $v'(X'X + aI)v \ge a\|v\|^2 > 0$). Because every positive definite matrix is non-singular 2, $X'X + aI$ must have an inverse. If $a > 0$, a solution to the ridge regression problem always exists and it is unique.
If $a = 0$ and the rank of $X$ is less than $n$, the matrix $X'X$ becomes singular 3. The geometrical interpretation of this situation was discussed above.
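The closed-form solution above is easy to check numerically. The following is a minimal sketch with invented data (numpy is the only dependency); it verifies that $X'X + aI$ is positive definite and that the resulting $w$ minimises the ridge objective.

```python
import numpy as np

# Invented data: T = 5 signals in R^2 with noisy linear outcomes.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))            # rows are the signals x_t'
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.standard_normal(5)
a = 0.5                                    # ridge coefficient a > 0

n = X.shape[1]
# w = (X'X + aI)^{-1} X'Y
w = np.linalg.solve(X.T @ X + a * np.eye(n), X.T @ y)

# X'X + aI is positive definite: its smallest eigenvalue is at least a.
assert np.min(np.linalg.eigvalsh(X.T @ X + a * np.eye(n))) >= a - 1e-12

# w is the unique minimiser of sum_t (y_t - w'x_t)^2 + a ||w||^2.
obj = lambda v: np.sum((y - X @ v) ** 2) + a * np.sum(v ** 2)
assert obj(w) <= obj(w + 0.05)
```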
As $a$ approaches 0, the matrix $X'X + aI$ may become close to singular. The numerical routines for finding $w$ will then become less and less stable: they will have to deal with very big or very small values and make large round-off errors. Taking a larger $a$ thus stabilises the computation, as mentioned above.
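The stabilising effect can be illustrated with a short numerical sketch (random data assumed): with $T < n$ the matrix $X'X$ is singular, and adding $aI$ improves its condition number.

```python
import numpy as np

# With fewer samples than features, X'X is singular; adding aI bounds
# the condition number of the system to be solved.
rng = np.random.default_rng(5)
X = rng.standard_normal((3, 5))            # T = 3 samples, n = 5 features
M = X.T @ X                                # rank at most 3 < 5

assert np.linalg.matrix_rank(M) < 5
# The condition number of M + aI improves as a grows.
assert np.linalg.cond(M + 1.0 * np.eye(5)) < np.linalg.cond(M + 0.01 * np.eye(5))
```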
Let us attempt a rough hypothetical analysis of the predictive performance of ridge regression for different values of $a$. If $a$ is very big, the term $a\|w\|^2$ completely overshadows the sum of squared discrepancies and the predictive performance deteriorates. If $a$ is very small, we may encounter numerical problems. An optimal value should thus be neither too big nor too small. In some sense it must be comparable in size to the elements of $X'X$. The exact choice of $a$ depends on the particular dataset.
Finally, let us go back to the term "ridge regression". One of the versions of its etymology is that the diagonal of $aI$ forms a "ridge" added on top of the least squares matrix $X'X$.
Using the matrix identity $(X'X + aI)^{-1}X' = X'(XX' + aI)^{-1}$ we can rewrite the ridge regression solution as follows. For an arbitrary signal $x$ the outcome suggested by ridge regression is $\gamma(x) = x'(X'X + aI)^{-1}X'Y$, and this can be rewritten as $\gamma(x) = Y'(XX' + aI)^{-1}Xx$.
Similar arguments concerning non-singularity apply to $XX' + aI$. The matrix $XX'$ has the size $T \times T$. This might seem a disadvantage compared to the primary form: it is natural to expect that in practice $n$ would be fixed and not too big, while the size $T$ of the sample may be quite large. However this formula allows us to develop important generalisations.
We can say that this formula represents ridge regression in the dual form (as opposed to the primary form above). However there is a more interesting way to interpret the dual form formula: the signals enter it only through the scalar products $x_i'x_j$ (the elements of $XX'$) and $x_t'x$ (the elements of $Xx$).
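The agreement of the primary and dual forms can be checked on invented data; the sketch below exercises the identity $(X'X + aI)^{-1}X' = X'(XX' + aI)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))            # T = 6, n = 3
y = rng.standard_normal(6)
a = 1.0
x_new = rng.standard_normal(3)

# Primary form: gamma(x) = x'(X'X + aI)^{-1} X'Y, an n x n system.
w = np.linalg.solve(X.T @ X + a * np.eye(3), X.T @ y)
primal = x_new @ w

# Dual form: gamma(x) = Y'(XX' + aI)^{-1} Xx, a T x T system; the
# signals enter only through the scalar products in XX' and Xx.
dual = y @ np.linalg.solve(X @ X.T + a * np.eye(6), X @ x_new)

assert abs(primal - dual) < 1e-10          # the two forms agree
```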
Now let us try to extend the class of functions we use and consider a wider class. Suppose that $n = 1$, i.e., all signals $x$ are numbers, and we are interested in approximations by polynomials of degree $d$, i.e., functions of the form $w_0 + w_1x + \ldots + w_dx^d$. Of course we can write down the sum of squared discrepancies for this case, perform the differentiation and find the solution as we did before. However there is a simpler argument based on the dual form.
Let us map $\mathbb{R}$ into $\mathbb{R}^{d+1}$ as follows: $x \mapsto (1, x, x^2, \ldots, x^d)'$. Once we have done this, we can do linear regression on the new "long" signals. If we use the dual form, we do not even have to perform the transformations explicitly. Because we only need scalar products, we can compute all the necessary products and substitute them into the dual form formula.
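A sketch of this for degree 2 (the data and the particular scaled feature map are invented for illustration): the map $x \mapsto (1, \sqrt{2}\,x, x^2)'$ has scalar products $(1 + xz)^2$, so the dual form needs only the kernel values.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(7)                 # scalar signals
y = rng.standard_normal(7)
a = 0.3
z = 0.7                                    # a new signal

# Explicit ("long" signals) version.
Phi = np.column_stack([np.ones_like(x), np.sqrt(2.0) * x, x ** 2])
w = np.linalg.solve(Phi.T @ Phi + a * np.eye(3), Phi.T @ y)
explicit = np.array([1.0, np.sqrt(2.0) * z, z ** 2]) @ w

# Implicit version: only k(x, z) = (1 + x z)^2 is ever evaluated.
K = (1 + np.outer(x, x)) ** 2
implicit = y @ np.linalg.solve(K + a * np.eye(7), (1 + x * z) ** 2)

assert abs(explicit - implicit) < 1e-10
```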
Let us write down a formal generalisation. The signals do not have to come from $\mathbb{R}^n$ any longer. Let them be drawn from some arbitrary set $X$ 4. Suppose that we have a mapping $\Phi: X \to F$, where $F$ is some vector space equipped with a scalar product (dot-product space); the space $F$ is sometimes referred to as the feature space. We can use ridge regression in the feature space. The prediction of ridge regression on a signal $x$ can be written in terms of the scalar products $\langle\Phi(x_i), \Phi(x_j)\rangle$ and $\langle\Phi(x_t), \Phi(x)\rangle$ alone.
The space $F$ does not have to be finite-dimensional. However, since every vector space with a scalar product can be embedded into a Hilbert space (see below for a definition), we can assume that $F$ is Hilbert.
The transformation $\Phi$ is of no particular importance to us. Once we know the kernel $k(x_1, x_2) = \langle\Phi(x_1), \Phi(x_2)\rangle$, we can perform regression with it. A justification for ridge regression in the kernel case will be obtained below in Subsection 5.3.
It would be nice to have a characterisation of all such functions $k$ without a reference to $\Phi$. A characterisation of this kind can be given.
It is easy to see that $k$ has the following properties:
- it is symmetric: $k(x_1, x_2) = k(x_2, x_1)$ for all $x_1, x_2 \in X$; this follows from the symmetry of the scalar product;
- it is positive semi-definite: for every positive integer $T$ and every $x_1, x_2, \ldots, x_T \in X$ the matrix $K = (k(x_i, x_j))_{i,j=1,\ldots,T}$ is positive semi-definite 5; indeed, $K$ is the Gram matrix of the images $\Phi(x_1), \ldots, \Phi(x_T)$ 6.
Surprisingly these two simple properties are sufficient. Let us call a function satisfying these two properties a kernel. Then the following theorem can be formulated.
We proved the "if" part when we defined kernels. The "only if" part follows from the results of the next sections, where we will show that the class of kernels coincides with the class of so-called reproducing kernels.
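The two defining properties are easy to check numerically on a sample of points. A small sketch with the Gaussian kernel $k(x, z) = e^{-(x - z)^2}$ (an arbitrary illustrative choice):

```python
import numpy as np

# Check symmetry and positive semi-definiteness of a Gram matrix.
def k(x, z):
    return np.exp(-(x - z) ** 2)

pts = np.array([-1.0, 0.0, 0.5, 2.0])
K = k(pts[:, None], pts[None, :])

assert np.allclose(K, K.T)                      # symmetry
assert np.min(np.linalg.eigvalsh(K)) > -1e-12   # positive semi-definiteness
```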
In this section we introduce reproducing kernel Hilbert spaces (RKHS) and show some of their basic properties. The presentation is based mainly on [Aronszajn, 1943] and [Aronszajn, 1950] and the reader may consult these papers for more details; note that the former paper is in French.
A set $H$ of some elements is a Hilbert space if
- $H$ is a vector space over $\mathbb{R}$ (Hilbert spaces over the complex plane can also be considered, but we shall restrict ourselves to $\mathbb{R}$ in this article);
- $H$ is equipped with a scalar product (i.e., with a symmetric positive definite bilinear form);
- $H$ is complete w.r.t. the metric generated by the scalar product, i.e., every fundamental (Cauchy) sequence of elements of $H$ converges.
Some authors require a Hilbert space to be separable, i.e., to have a countable dense subset. For example, [Aronszajn, 1943] reserves the name "Hilbert space" for separable spaces. We shall not impose this requirement by default.
As a matter of fact, all infinite-dimensional separable Hilbert spaces are isomorphic (the situation is similar to that with finite-dimensional spaces; an infinite-dimensional separable Hilbert space is "countable-dimensional" in the sense of having a countable orthonormal basis).
A typical (though not particularly relevant to this article) example of a Hilbert space is provided by $L_2(X, \mu)$, which is the space of all real-valued functions $f$
on $X$ such that $f^2$ is Lebesgue-integrable w.r.t. the measure $\mu$ on $X$, with the scalar product $\langle f, g \rangle = \int_X fg \, d\mu$. Another example is given by $\ell_2$, which is the set of infinite sequences $(x_1, x_2, \ldots)$, $x_i \in \mathbb{R}$, such that the sum $\sum_{i=1}^\infty x_i^2$ converges. Both $\ell_2$ and $L_2([0, 1])$ with the standard Lebesgue measure are separable; therefore they are isomorphic.
Let $H$ be a Hilbert space consisting of functions on a set $X$. A function $k: X \times X \to \mathbb{R}$ is a reproducing kernel (r.k.) for $H$ if
- for every $x \in X$ the function $k(x, \cdot)$ (i.e., $k$ considered as a function of the second argument with the first argument fixed) belongs to $H$;
- the reproducing property holds: for every $f \in H$ and every $x \in X$ we have $f(x) = \langle f, k(x, \cdot) \rangle$. A space admitting a reproducing kernel is called a reproducing kernel Hilbert space (RKHS).
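For finite combinations $f = \sum_i \alpha_i k(x_i, \cdot)$ the scalar product can be evaluated through the Gram matrix, and the reproducing property combined with Cauchy-Schwarz gives the bound $|f(x)| \le \|f\| \sqrt{k(x, x)}$ used repeatedly below. A numerical sketch (the polynomial kernel and the points are arbitrary choices):

```python
import numpy as np

# f = sum_i alpha_i k(x_i, .), with ||f||^2 = alpha' G alpha, where G is
# the Gram matrix; we verify |f(x)| <= ||f|| sqrt(k(x, x)) at a few points.
def k(x, z):
    return (1 + x * z) ** 3

xs = np.array([0.0, 1.0, -2.0])
alpha = np.array([0.5, -1.0, 2.0])
G = k(xs[:, None], xs[None, :])          # Gram matrix of the k(x_i, .)
f = lambda x: alpha @ k(xs, x)           # evaluation of f at x
norm_f = np.sqrt(alpha @ G @ alpha)      # RKHS norm of f

for x in [-1.5, 0.3, 2.0]:
    assert abs(f(x)) <= norm_f * np.sqrt(k(x, x)) + 1e-12
```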
Let us formulate and prove some basic properties of reproducing kernels.
- If a r.k. for $H$ exists, it is unique.
- If $k$ is a reproducing kernel for $H$, then for all $x \in X$ and $f \in H$ we have $|f(x)| \le \|f\| \sqrt{k(x, x)}$.
- If $H$ is an RKHS, then convergence in $H$ implies pointwise convergence of the corresponding functions.
Proof: In order to prove (1) suppose that there are two r.k.s $k_1$ and $k_2$ for the same space $H$. For every $x \in X$ the function $g = k_1(x, \cdot) - k_2(x, \cdot)$ belongs to $H$ and, applying linearity and the reproducing property with each of the two kernels in turn, we get $\|g\|^2 = \langle g, k_1(x, \cdot) \rangle - \langle g, k_2(x, \cdot) \rangle = g(x) - g(x) = 0$.
Property (2) follows immediately from the reproducing property and the Cauchy(-Schwarz-Bunyakovsky) inequality.
Property (3) follows from (2). Indeed, for all $x \in X$ and $f, f_1, f_2, \ldots \in H$ we have $|f_n(x) - f(x)| \le \|f_n - f\| \sqrt{k(x, x)}$.
We shall now give an important "internal" characterisation of reproducing kernel Hilbert spaces.
Let $H$ be a Hilbert space consisting of real-valued functions on a set $X$. Take $x \in X$ and consider the functional mapping $f \in H$ into $f(x)$. It is linear (in $f$) and is called the evaluation functional.
Note that the evaluation functional is not defined on $L_2$: the elements of $L_2$ are in fact equivalence classes of functions that coincide outside a set of measure 0, and thus they are not really defined at every point 7.
Proof: The "only if" part follows from (2) from the previous theorem.
In order to prove the "if" part we need the Riesz Representation Theorem, which states that every continuous linear functional on a Hilbert space can be represented as the scalar product with some fixed element of the space.
Take $x \in X$. Because the evaluation functional is continuous, there is a unique $h_x \in H$ such that $f(x) = \langle f, h_x \rangle$ for every $f \in H$. We can define a mapping $k: X \times X \to \mathbb{R}$ by $k(x, y) = h_x(y)$. It is easy to check that $k$ is a reproducing kernel for $H$.
This criterion is quite important. The continuity of the evaluation functional means that it is consistent with the norm: functions $f_1$ and $f_2$ that are close with respect to the norm evaluate to values $f_1(x)$ and $f_2(x)$ that are close. If we consider functions from some space as hypotheses in machine learning and the norm on the space as a measure of complexity, it is natural to require the continuity of the evaluation functional. The theorem shows that all "natural" Hilbert spaces of functions are in fact reproducing kernel Hilbert spaces.
We have shown that a r.k. can be represented as $k(x_1, x_2) = \langle k(x_1, \cdot), k(x_2, \cdot) \rangle$. This implies that $k$ is
- symmetric due to the symmetry of the scalar product;
- positive semi-definite, i.e., for all $x_1, x_2, \ldots, x_T \in X$ the matrix $K = (k(x_i, x_j))_{i,j=1,\ldots,T}$ is positive semi-definite; this holds since $K$ is the Gram matrix of the functions $k(x_i, \cdot)$. Thus every reproducing kernel is a kernel according to the definition from the previous section. The following theorem shows that the classes of kernels and reproducing kernels coincide.
A function $k: X \times X \to \mathbb{R}$ is a reproducing kernel for some Hilbert space of functions on $X$ if and only if
- $k$ is symmetric;
- $k$ is positive semi-definite. If there is a space admitting $k$ as its reproducing kernel, it is unique.
In this section we will prove the existence theorem. Let $k: X \times X \to \mathbb{R}$ be a kernel.
We start the proof by constructing a linear space $H_0$ of functions on $X$ consisting of all finite linear combinations $\sum_{i=1}^n \alpha_i k(x_i, \cdot)$, where $n$ is a positive integer, $\alpha_i \in \mathbb{R}$, and $x_i \in X$. The linearity follows by construction.
The scalar product is defined after the following fashion. Let $f = \sum_{i=1}^n \alpha_i k(x_i, \cdot)$ and $g = \sum_{j=1}^m \beta_j k(z_j, \cdot)$; put $\langle f, g \rangle = \sum_{i=1}^n \sum_{j=1}^m \alpha_i \beta_j k(x_i, z_j)$.
We have $\langle f, g \rangle = \sum_{j=1}^m \beta_j f(z_j) = \sum_{i=1}^n \alpha_i g(x_i)$; these expressions depend only on $f$ and $g$ as functions, so the scalar product is well-defined, i.e., independent of the particular representations of $f$ and $g$.
The form $\langle \cdot, \cdot \rangle$ is symmetric because $k$ is symmetric. For $f$ as above we have $\langle f, f \rangle = \sum_{i,j=1}^n \alpha_i \alpha_j k(x_i, x_j) \ge 0$ because $k$ is positive semi-definite.
Let us evaluate $\langle f, k(x, \cdot) \rangle$, where $f \in H_0$ and $x$ is some element of $X$. We get $\langle f, k(x, \cdot) \rangle = \sum_{i=1}^n \alpha_i k(x_i, x) = f(x)$, i.e., the reproducing property holds on $H_0$.
Because the form is positive semi-definite, the Cauchy(-Schwarz-Bunyakovsky) inequality holds for it and $f^2(x) = \langle f, k(x, \cdot) \rangle^2 \le \langle f, f \rangle k(x, x)$. Hence $\langle f, f \rangle = 0$ implies that $f$ vanishes everywhere, and the form is in fact positive definite, i.e., a true scalar product.
The construction is not finished yet because $H_0$ is not necessarily complete. It remains to construct a completion of $H_0$. It is well known that every linear space with a scalar product has a completion, which is a Hilbert space. However, this generic argument cannot be applied here 8: we need a completion of a specific form, namely, one consisting of functions on $X$. Note, however, that we have already proved Theorem 1 from Section 1: we can map $X$ into some Hilbert space so that the value of the kernel is given by the scalar product of images. The mapping is given by the obvious $\Phi(x) = k(x, \cdot)$.
In this subsection we will construct a completion of $H_0$.
Let $f_1, f_2, \ldots \in H_0$ be a fundamental sequence. For every $x \in X$ the inequalities $|f_i(x) - f_j(x)| \le \|f_i - f_j\| \sqrt{k(x, x)}$ imply that the sequence of values $f_1(x), f_2(x), \ldots$ is fundamental in $\mathbb{R}$ and therefore has a limit, which we denote by $f(x)$.
Let $H$ consist of all functions $f$ thus obtained. Clearly, $H_0 \subseteq H$, since each $f$ from $H_0$ is the pointwise limit of the constant sequence $f, f, \ldots$.
The scalar product on $H$ can be introduced as follows. If $f$ is the pointwise limit of $f_n$ and $g$ is the pointwise limit of $g_n$, then $\langle f, g \rangle = \lim_{n \to \infty} \langle f_n, g_n \rangle$.
Let us show that this limit exists. For all positive integers $m$ and $n$ we have $|\langle f_n, g_n \rangle - \langle f_m, g_m \rangle| \le |\langle f_n - f_m, g_n \rangle| + |\langle f_m, g_n - g_m \rangle| \le \|f_n - f_m\| \cdot \|g_n\| + \|f_m\| \cdot \|g_n - g_m\|$; since the norms $\|f_m\|$ and $\|g_n\|$ are bounded, the sequence $\langle f_n, g_n \rangle$ is fundamental in $\mathbb{R}$ and has a limit.
Let us show that the scalar product is independent of the choice of fundamental sequences converging to $f$ and $g$. Consider two pairs of fundamental sequences: $f_n$ and $\tilde f_n$ converging to $f$, and $g_n$ and $\tilde g_n$ converging to $g$.
Consider the expression $\langle f_n - \tilde f_n, g_n \rangle$. The sequence consisting of the functions $f_n - \tilde f_n$ is clearly fundamental; therefore, as shown above, there must exist a limit $\lim_{n \to \infty} \langle f_n - \tilde f_n, g_n \rangle$. Let us evaluate this limit. For a fixed $m$ there are coefficients $\beta_j$ and elements $z_j \in X$ such that $g_m = \sum_j \beta_j k(z_j, \cdot)$ ($\beta_j$ and $z_j$ may change as $m$ varies). We have $\langle f_n - \tilde f_n, g_m \rangle = \sum_j \beta_j (f_n - \tilde f_n)(z_j) \to 0$ as $n \to \infty$, because $f_n - \tilde f_n$ converges to 0 pointwise; since $\|g_n - g_m\|$ can be made arbitrarily small by choosing $m$ and $n$ large, we conclude that $\lim_{n \to \infty} \langle f_n - \tilde f_n, g_n \rangle = 0$ and therefore $\lim \langle f_n, g_n \rangle = \lim \langle \tilde f_n, g_n \rangle$. The same argument applies to the second argument of the scalar product.
The bilinearity of $\langle \cdot, \cdot \rangle$ on $H$ is easy to check. The number $\langle f, f \rangle$ is non-negative as a limit of non-negative numbers. More precisely, let $f_n$ be a fundamental sequence converging to $f$ pointwise. Because $|f_n(x)| \le \|f_n\| \sqrt{k(x, x)}$, letting $n \to \infty$ yields $|f(x)| \le \sqrt{\langle f, f \rangle} \sqrt{k(x, x)}$; thus $\langle f, f \rangle = 0$ implies that $f$ is identically zero.
We have shown that $H$ is indeed a linear space with a scalar product. Clearly, $H_0 \subseteq H$, and the scalar product on $H$ extends that on $H_0$.
Let us show that $H$ is complete. First, let $f_n$ be a fundamental sequence of elements of $H_0$ converging pointwise to $f \in H$. We have $\|f - f_n\|^2 = \lim_{m \to \infty} \langle f_m - f_n, f_m - f_n \rangle$, which can be made arbitrarily small by choosing $n$ large enough; hence $f_n$ converges to $f$ in the norm of $H$ and $H_0$ is dense in $H$. A fundamental sequence of elements of $H$ can now be approximated by a fundamental sequence of elements of $H_0$ with the same limit, and therefore $H$ is complete.
It remains to show that the reproducing property holds on $H$. It follows by continuity. Let $f_n$ be a fundamental sequence of elements of $H_0$ converging pointwise to $f$. We have $\langle f, k(x, \cdot) \rangle = \lim_{n \to \infty} \langle f_n, k(x, \cdot) \rangle = \lim_{n \to \infty} f_n(x) = f(x)$.
We have constructed an RKHS for $k$. Note that the space $H_0$ constructed in the previous subsection is dense in it.
Let us show that the RKHS for a particular kernel is unique. Let $H$ be the RKHS constructed above and let $H_1$ be some other RKHS for the same kernel $k$.
The definition of an RKHS implies that all functions $k(x, \cdot)$, $x \in X$, must belong to $H_1$. The same must be true of their finite linear combinations. Thus $H_0 \subseteq H_1$ as a set.
Since the reproducing property holds on $H_1$, on elements of $H_0$ the scalar product of $H_1$ must coincide with the scalar product we constructed above. Thus $H_0$ is a subspace of $H_1$.
Because $H_1$ is complete, all fundamental sequences from $H_0$ should have limits in $H_1$. In RKHSs convergence implies pointwise convergence, and thus all pointwise limits of fundamental sequences from $H_0$ belong to $H_1$. Thus $H \subseteq H_1$ as a set. Because the scalar product is continuous w.r.t. itself, the scalar products of $H$ and $H_1$ coincide on $H$.
Let $f \in H_1$. We can represent it as $f = f_1 + f_2$, where $f_1 \in H$ and $f_2$ is orthogonal to $H$ and therefore to all functions $k(x, \cdot)$, which belong to $H$. Because the reproducing property holds on $H_1$, we get $f_2(x) = \langle f_2, k(x, \cdot) \rangle = 0$ for every $x \in X$; hence $f = f_1 \in H$ and $H_1 = H$.
In this section we provide a selection of minor results and counterexamples concerning RKHSs. They will help the reader to get a better intuition of RKHSs and perhaps dispel some misconceptions.
Suppose that $X$ is a topological space and $k$ is continuous (in both arguments, i.e., on $X \times X$). One can claim that any feature mapping $\Phi$ associated with $k$ is continuous. Indeed, $\|\Phi(x_1) - \Phi(x_2)\|^2 = k(x_1, x_1) - 2k(x_1, x_2) + k(x_2, x_2)$, and the right-hand side tends to 0 as $x_2$ tends to $x_1$.
If moreover $X$ is separable, then the RKHS is separable. Indeed, we have shown in the proof of the existence theorem that finite linear combinations $\sum_i \alpha_i k(x_i, \cdot)$ are dense in the RKHS. The coefficients $\alpha_i$ can be taken to be rational. Take a countable dense subset $Q \subseteq X$. Because the mapping $x \mapsto k(x, \cdot)$ is a feature mapping, it is continuous, and we can approximate any $k(x, \cdot)$ with some $k(q, \cdot)$, $q \in Q$.
In most natural cases signals are arrays of reals, i.e., $X \subseteq \mathbb{R}^n$, and the kernel is continuous; the RKHS is therefore separable. Note however that inseparable RKHSs exist. Indeed, consider the space of functions $f$ on $[0, 1]$ such that $f(x) = 0$ everywhere except, perhaps, on a countable set of points and $\sum_{x: f(x) \ne 0} f^2(x) < +\infty$.
The space is an RKHS. Indeed, consider the kernel $k$ with $k(x_1, x_2)$ equal to 1 if $x_1 = x_2$ and to 0 otherwise; the functions $k(x, \cdot)$ belong to the space and the reproducing property is easy to check.
On the other hand the space is not separable. Indeed, the functions $k(x, \cdot)$, $x \in [0, 1]$, form an uncountable orthonormal system.
As we know from Theorem 2, convergence in an RKHS implies pointwise convergence of functions. Let us show that the opposite is not true.
A simple counterexample is provided by $\ell_2$ considered as a set of functions on the set of positive integers $\{1, 2, \ldots\}$. It is an RKHS with the kernel $k(i, j)$ equal to 1 if $i = j$ and to 0 otherwise; the functions $e_n = k(n, \cdot)$ converge to 0 pointwise, while $\|e_n\| = 1$ for all $n$.
Requiring $k$ to be continuous and the domain to be connected will not change the situation; the counterexample can be easily extended to cover this possibility. Indeed, take $X = [1, +\infty)$. For each positive integer $n$ consider the "tooth" function $u_n$ equal to 0 outside $(n - 1, n + 1)$, 1 at $n$, and linear on $[n - 1, n]$ and $[n, n + 1]$. Let the space consist of the functions $f = \sum_{n=1}^\infty a_n u_n$, where $(a_1, a_2, \ldots) \in \ell_2$. A function $f$ of this form is continuous on $X$, its values at integer points form a sequence from $\ell_2$, and its values between integer points are given by linear interpolation. Let the scalar product be induced by $\ell_2$: $\langle f, g \rangle = \sum_{n=1}^\infty f(n)g(n)$. This space is isomorphic to $\ell_2$.
The space is an RKHS with the following kernel. For a positive integer $n$ we have $k(n, \cdot) = u_n$. For $x \in (n, n + 1)$, where $n$ is a positive integer, $k(x, \cdot) = (n + 1 - x)u_n + (x - n)u_{n+1}$. This kernel is continuous on $X \times X$. On the other hand pointwise (and even uniform) convergence does not imply convergence in the norm.
It is easy to do a further step and show that $X$ can be made connected and compact. Note that for any sequence $(a_n) \in \ell_2$ we have $a_n \to 0$ as $n \to \infty$ and therefore for any $f$ from our space we have $f(x) \to 0$ as $x \to +\infty$. We can define $f(+\infty) = 0$ by continuity. The set $[1, +\infty]$ can then be mapped onto a compact interval by a continuous monotone bijection.
The following intuition may be helpful. As explained above, an RKHS may be thought of as a Hilbert space of hypotheses and the norm as the hypothesis complexity. The functions uniformly close to zero but having non-zero norms correspond to overcomplicated hypotheses with no real explanatory power.
We know from part 1 of Theorem 2 that for each RKHS there is a unique kernel, i.e., if two kernels $k_1$ and $k_2$ specify the same RKHS, then they coincide. Here the words "the same RKHS" mean the same set of functions with the same scalar product on them.
What if the scalar product is different? Can two different kernels specify RKHSs with the same sets of functions?
It is easy to construct a trivial example. A kernel $k$ specifies the same set of functions as the kernel $ck$ for any constant $c > 0$. Indeed, recall the construction from the existence theorem. The kernels $k$ and $ck$ specify the same set of finite linear combinations with equivalent norms. A sequence is fundamental in one space if and only if it is fundamental in the other, and the completions, consisting of pointwise limits, will coincide.
A slightly less trivial example may be constructed as follows. Let $X = \mathbb{R}$ and let $\Phi: \mathbb{R} \to \mathbb{R}^2$ be given by $\Phi(x) = (1, x)'$. If $\mathbb{R}^2$ is equipped with the standard Euclidean scalar product, we get the kernel $k_1(x_1, x_2) = 1 + x_1x_2$. The RKHS corresponding to this kernel consists of functions of the form $w_1 + w_2x$, where $(w_1, w_2)' \in \mathbb{R}^2$.
Now let $\Psi: \mathbb{R} \to \mathbb{R}^2$ be given by $\Psi(x) = (1, 2x)'$. It specifies the kernel $k_2(x_1, x_2) = 1 + 4x_1x_2$, which is not a multiple of $k_1$. The RKHS corresponding to $k_2$ consists of functions of the form $w_1 + 2w_2x$, where $(w_1, w_2)' \in \mathbb{R}^2$. It is easy to see that this set of functions is the set of polynomials of degree 1, just as in the first case. The kernels specifying the RKHSs with the same sets of functions thus do not have to be multiples of each other.
This argument can be generalised further as follows. Let $\Phi: X \to F$ specify the kernel $k$ and let $A$ be a bounded linear operator in the Hilbert space $F$. Consider the mapping $A\Phi$ and the kernel $k_A(x_1, x_2) = \langle A\Phi(x_1), A\Phi(x_2) \rangle$. The RKHS corresponding to this kernel consists of functions of the form $f(x) = \langle A\Phi(x), h \rangle$. This can be rewritten as $f(x) = \langle \Phi(x), A^*h \rangle$, where $A^*$ is the operator adjoint to $A$. If $A$ (and therefore $A^*$) is invertible, $A^*h$ ranges over the whole space $F$ and the RKHS for $k_A$ consists of the same functions as that for $k$.
The following observation can also be made.
- for every kernel $k$ and every constant $c > 0$, the kernel $ck$ specifies the RKHS with the same set of functions as the RKHS for $k$;
- for every two kernels $k_1$ and $k_2$ on $X$ specifying RKHSs with the same set of functions, the kernel $k_1 + k_2$ specifies the RKHS with the same set of functions.
Proof: The multiplicative part has already been proved above. Let us prove the additive part.
Let $\Phi_1: X \to F_1$ and $\Phi_2: X \to F_2$ be feature mappings corresponding to $k_1$ and $k_2$. Consider the Hilbert space $F_1 \times F_2$ with the "componentwise" scalar product $\langle (h_1, h_2), (g_1, g_2) \rangle = \langle h_1, g_1 \rangle_1 + \langle h_2, g_2 \rangle_2$ (here $\langle \cdot, \cdot \rangle_1$ and $\langle \cdot, \cdot \rangle_2$ are the scalar products on $F_1$ and $F_2$, respectively). Clearly the mapping $\Phi: X \to F_1 \times F_2$ given by $\Phi(x) = (\Phi_1(x), \Phi_2(x))$ specifies the kernel $k_1 + k_2$. The corresponding RKHS consists of functions of the form $f(x) = \langle \Phi_1(x), h_1 \rangle_1 + \langle \Phi_2(x), h_2 \rangle_2$, where $h_1 \in F_1$ and $h_2 \in F_2$. Since $h_1$ or $h_2$ can be equal to 0, the RKHS contains the RKHSs for $k_1$ and $k_2$ as sets. Since the two sets of functions are equal, adding them up does not produce anything new.
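The additive part can also be checked directly on Gram matrices: the Gram matrix of a sum of kernels is the sum of two positive semi-definite matrices. A small sketch (the linear kernel and the kernel $\min(x, z)$ on the non-negative half-line are arbitrary illustrative choices):

```python
import numpy as np

pts = np.array([0.2, 0.7, 1.5, 3.0])
K1 = np.outer(pts, pts)                        # linear kernel x z
K2 = np.minimum(pts[:, None], pts[None, :])    # min(x, z) on [0, +inf)
G = K1 + K2                                    # Gram matrix of k1 + k2

assert np.allclose(G, G.T)
assert np.min(np.linalg.eigvalsh(G)) > -1e-12  # still positive semi-definite
```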
We have shown that the three definitions of a kernel are equivalent:
- a positive semi-definite symmetric function;
- a reproducing kernel;
- the scalar product in a feature space, i.e., $k(x_1, x_2) = \langle \Phi(x_1), \Phi(x_2) \rangle$, where $\Phi$ maps $X$ into a Hilbert space $F$. The RKHS for a particular kernel is unique. Note that uniqueness holds in a very strong sense: it is a unique set of functions with a uniquely defined scalar product; there are no isomorphisms or equivalences involved.
The mapping $\Phi$ in the third definition is by no means unique. Indeed, let $F = \ell_2$. Consider the right shift $S$ defined by $S(x_1, x_2, \ldots) = (0, x_1, x_2, \ldots)$. The composition $S \circ \Phi$ will produce the same kernel as $\Phi$, because $S$ preserves the scalar product.
However there is some degree of uniqueness. Let $S \subseteq F$ be the closure of the linear span of all images $\Phi(x)$, $x \in X$. It is isomorphic to the RKHS.
Proof: Let us denote the closure of the span by $S$ and the RKHS by $H$. Let $S_0 \subseteq S$ be the set of finite sums of the form $\sum_i \alpha_i \Phi(x_i)$, where the $\alpha_i$ are some coefficients, and let $H_0 \subseteq H$ be the set of finite combinations $\sum_i \alpha_i k(x_i, \cdot)$, as in the construction above.
We start by constructing the isomorphism $M$ of $S_0$ and $H_0$. Put $M\Phi(x) = k(x, \cdot)$ and, by linearity, $M(\sum_i \alpha_i \Phi(x_i)) = \sum_i \alpha_i k(x_i, \cdot)$. We need to show that $M$ is well-defined. Let $\sum_i \alpha_i \Phi(x_i) = \sum_j \beta_j \Phi(z_j)$ for some coefficients $\alpha_i$ and $\beta_j$ and elements $x_i$ and $z_j$. Then for every $x \in X$ we have $\langle \Phi(x), \sum_i \alpha_i \Phi(x_i) \rangle = \langle \Phi(x), \sum_j \beta_j \Phi(z_j) \rangle$, i.e., $\sum_i \alpha_i k(x_i, x) = \sum_j \beta_j k(z_j, x)$, so the two images coincide as functions.
The mapping $M$ preserves the scalar product: $\langle \sum_i \alpha_i \Phi(x_i), \sum_j \beta_j \Phi(z_j) \rangle = \sum_{i,j} \alpha_i \beta_j k(x_i, z_j) = \langle \sum_i \alpha_i k(x_i, \cdot), \sum_j \beta_j k(z_j, \cdot) \rangle$.
The mapping $M$ is surjective onto $H_0$. Indeed, each sum $\sum_i \alpha_i k(x_i, \cdot)$ has an inverse image. The mapping is also injective. Assume the converse. Then there is an element $s \in S_0$ such that $s \ne 0$ but $Ms = 0$ in the RKHS. This is a contradiction because $M$ preserves the scalar product and therefore the norm. Thus $M$ is a bijection between $S_0$ and $H_0$.
Let us extend $M$ to an isomorphism of $S$ and $H$. Let $s_n \in S_0$ converge to $s \in S$. The sequence $s_n$ is fundamental in $S$. Since $M$ preserves the scalar product on $S_0$, the images $Ms_n$ form a fundamental sequence in $H$. It should converge. Put $Ms = \lim_{n \to \infty} Ms_n$.
Suppose that there are two sequences $s_n$ and $\tilde s_n$ converging to $s$. Let us mix the sequences into one (e.g., by letting $r_{2n} = s_n$ and $r_{2n+1} = \tilde s_n$). The sequence $r_n$ converges to $s$ and is therefore fundamental. The images of $r_n$ must form a fundamental sequence in $H$ and must have a limit. All its subsequences should converge to the same limit. Thus $\lim Ms_n = \lim M\tilde s_n$ and $M$ is well-defined.
The scalar product is preserved by continuity. The surjectivity can be shown as follows. Let $f \in H$ and let $f_n \in H_0$ converge to $f$. The inverse images of $f_n$ must form a fundamental sequence in $S$ and must have a limit $s$. It follows from the definition that $Ms = f$. The injectivity on $S$ follows from the same argument as on $S_0$.
The theorem follows.
The mapping $M$ can be extended to a mapping of the whole $F$ by letting $Mh = M(Ph)$, where $P$ is the projection operator onto $S$. The mapping is no longer injective (unless $S$ coincides with $F$) and no longer preserves the scalar product. However we have $\|Mh\| = \min\{\|g\| : g \in F,\ Mg = Mh\}$, where the minimum is attained on the projection $Ph$.
The above construction allows us to give an interpretation of RKHSs that is important for machine learning.
In competitive prediction we prove consistency results of the following type. We do not assume the existence of a "correct" hypothesis but rather show that our method competes well with a class of some other predictors, such as all linear regressors. Therefore identifying and describing such natural classes is an important task.
In Hilbert spaces we have a natural equivalent of linear regressors. Those are continuous linear functionals, or, as the Riesz Representation Theorem shows, scalar products with a fixed element $h$. After we have mapped $X$ into a Hilbert space $F$, we can consider predictors of the form $f(x) = \langle \Phi(x), h \rangle$.
Theorem 6 immediately implies that the class of such functions coincides with the RKHS. Indeed, there is a unique decomposition $h = h_1 + h_2$, where $h_1$ is the projection of $h$ onto the closure of the span of the images $\Phi(x)$ and $h_2$ is orthogonal to it. We have $\langle \Phi(x), h \rangle = \langle \Phi(x), h_1 \rangle$ for all $x \in X$.
We may want to assign the norm $\|h\|$ to the predictor $\langle \Phi(\cdot), h \rangle$; clearly $\|h\| \ge \|h_1\|$. The space of predictors thus obtained does not exceed the RKHS, and the norms of predictors are equal to or greater than those of the corresponding elements of the RKHS. Thus regressors in the feature space have no more power than functions from the RKHS.
We get the following theorem as a bonus.
In this section we revisit the ridge regression problem from Section 1 and present one argument of great importance for competitive prediction.
Suppose that we have a sequence of signals $x_1, x_2, \ldots, x_T$ and outcomes $y_1, y_2, \ldots, y_T$ as in Section 1. On top of that suppose that we have a mapping $\Phi$ from the set of signals into a Hilbert feature space $F$. Take $h \in F$; as we said before, it can be considered as a regressor yielding the dependency $y = \langle \Phi(x), h \rangle$. We may ask if there is an $h$ minimising the expression $\sum_{t=1}^T (y_t - \langle \Phi(x_t), h \rangle)^2 + a\|h\|^2$.
Consider the predictor $\gamma(x) = Y'(\mathbf{K} + aI)^{-1}\kappa(x)$, where $\mathbf{K}$ is the $T \times T$ matrix of scalar products $\langle \Phi(x_i), \Phi(x_j) \rangle$ and $\kappa(x)$ is the vector with the components $\langle \Phi(x_t), \Phi(x) \rangle$.
Let $L$ be the subspace of $F$ consisting of all linear combinations of $\Phi(x_1), \ldots, \Phi(x_T)$ (it is closed because it is finite-dimensional). Take $h \in F$. It can be decomposed into a sum $h = h_1 + h_2$, where $h_1 \in L$ and $h_2$ is orthogonal to $L$. For all $t$ we have $\langle \Phi(x_t), h \rangle = \langle \Phi(x_t), h_1 \rangle$; we also have $\|h\|^2 = \|h_1\|^2 + \|h_2\|^2 \ge \|h_1\|^2$. Therefore the value of the expression at $h_1$ does not exceed its value at $h$. When minimising we can restrict ourselves to predictors from $L$, i.e., linear combinations of $\Phi(x_1), \ldots, \Phi(x_T)$!
Because $L$ is finite-dimensional, the arguments from Section 1 apply and ridge regression turns out to provide the minimum of the expression over $F$ (or, equivalently, the minimum of the corresponding expression over the RKHS).
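The restriction-to-the-span argument can be tried numerically. A sketch with invented data and an arbitrarily chosen Gaussian kernel: for $f = \sum_t c_t k(x_t, \cdot)$ we have $f(x_s) = (Kc)_s$ and $\|f\|^2 = c'Kc$, and the coefficients $c = (K + aI)^{-1}Y$ minimise the regularised loss over such combinations.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(5)
y = rng.standard_normal(5)
a = 0.4
K = np.exp(-(x[:, None] - x[None, :]) ** 2)    # Gaussian kernel Gram matrix

c = np.linalg.solve(K + a * np.eye(5), y)
# J(c) = sum_t (y_t - f(x_t))^2 + a ||f||^2 over f = sum_t c_t k(x_t, .)
J = lambda v: np.sum((y - K @ v) ** 2) + a * (v @ K @ v)

for _ in range(20):                             # random perturbations never improve J
    d = 0.01 * rng.standard_normal(5)
    assert J(c) <= J(c + d) + 1e-12
```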
This argument can be generalised to the representer theorem. We shall formulate two variants of it.
Proof: The proof is by observing that the projection of $h$ on the subspace spanned by $\Phi(x_1), \ldots, \Phi(x_T)$ has the same scalar products with the elements $\Phi(x_t)$ and a norm that is no larger.
Corollary 1. Let $k$ be a kernel and $H$ the corresponding RKHS; let the predictor be given by
Consider an RKHS $H$ corresponding to a kernel $k$. Let $H_1$ be a closed subspace of $H$. Clearly, $H_1$ is an RKHS. This can be shown as follows. The evaluation functional is continuous on $H$. Its restriction to $H_1$ should remain continuous, and therefore $H_1$ is an RKHS. This does not contradict the uniqueness theorem: if $H_1$ is a proper subspace of $H$, it is an RKHS for a different kernel $k_1$.
Suppose that $H$ itself is a subspace of some Hilbert space of functions. As we discussed above, in applications such as machine learning it does not make much sense to consider spaces where the evaluation functional is not continuous, and therefore the larger space should be an RKHS with its own kernel too.
One can say that RKHSs form hierarchies: larger spaces have more power than smaller spaces. However each of them has its own kernel. In competitive prediction the natural competitors of a kernel method are the functions from the corresponding RKHS. Other RKHSs require the use of a different kernel.
The rest of this section contains some results clarifying the structure of this hierarchy.
Proof: Let $P_1$ and $P_2$ be the projection operators from $H$ to $H_1$ and $H_2$, respectively.
Take a point $x \in X$. The evaluation functional on $H$ equals the scalar product with $k(x, \cdot)$. It is easy to see that $P_1k(x, \cdot)$ plays the same role in $H_1$. Indeed, $k(x, \cdot) = P_1k(x, \cdot) + P_2k(x, \cdot)$ and for every function $f \in H_1$ we have $f(x) = \langle f, k(x, \cdot) \rangle = \langle f, P_1k(x, \cdot) \rangle$.
We do this by showing that $P_1k(x, \cdot)$, as a function of the second argument, coincides with a reproducing kernel for $H_1$. Fix $x$ and denote the function $P_1k(x, \cdot)$ by $g_x$. We have $g_x \in H_1$. The above argument implies that $f(x) = \langle f, g_x \rangle$ for every $f \in H_1$, and thus $g_x$ reproduces the values of the functions from $H_1$. The reproducing property follows.
Similarly $P_2k(x, \cdot)$, as a function of the second argument, is the kernel for $H_2$.
Let $f \in H$. We have $f = P_1f + P_2f$ and $\|f\|^2 = \|P_1f\|^2 + \|P_2f\|^2$; therefore the kernels of $H_1$ and $H_2$ deliver the required decomposition of $k$.
It is easy to see that $H_1$ does not have to be a subspace of the RKHS for a sum of kernels. Indeed, let $k_1 = k_2$ and $H_1 = H_2$. Clearly, if we take the set of functions constituting $H_1$ and equip it with the scalar product $\frac{1}{2}\langle \cdot, \cdot \rangle_{H_1}$, we get the RKHS for $2k_1$. The space $H_1$ is a subset but not a subspace of it.
Proof: Let $\Phi_1: X \to F_1$ and $\Phi_2: X \to F_2$, mapping $X$ into Hilbert spaces $F_1$ and $F_2$, respectively, be feature maps giving rise to the kernels $k_1$ and $k_2$, i.e., $k_1(x_1, x_2) = \langle \Phi_1(x_1), \Phi_1(x_2) \rangle_1$ and $k_2(x_1, x_2) = \langle \Phi_2(x_1), \Phi_2(x_2) \rangle_2$.
The results of Subsect. 5.2 imply that the RKHS for $k = k_1 + k_2$ coincides with the set of functions $f(x) = \langle \Phi_1(x), h_1 \rangle_1 + \langle \Phi_2(x), h_2 \rangle_2$, where $(h_1, h_2) \in F_1 \times F_2$. Similarly, the RKHSs for $k_1$ and $k_2$ consist of all functions $\langle \Phi_1(x), h_1 \rangle_1$ and $\langle \Phi_2(x), h_2 \rangle_2$, respectively, where $h_1 \in F_1$ and $h_2 \in F_2$.
For every $h_1 \in F_1$ the element $(h_1, 0)$ belongs to $F_1 \times F_2$ ($0$ is the zero element of $F_2$ here). We have $\|(h_1, 0)\| = \|h_1\|_1$.
The minimum defining the norm of $f$ in the RKHS for $k$ is taken over pairs $(h_1, h_2)$; clearly, we can take the minimum over all pairs of functions $f_1$ from the RKHS for $k_1$ and $f_2$ from the RKHS for $k_2$ such that $f = f_1 + f_2$; indeed, $\|f\|^2 = \min(\|f_1\|_1^2 + \|f_2\|_2^2)$.
Now let the RKHSs for $k_1$ and $k_2$ intersect only in 0. Under this assumption every $f$ has a unique decomposition $f = f_1 + f_2$, where $f_1$ is from the RKHS for $k_1$ and $f_2$ is from the RKHS for $k_2$. Indeed, if $f_1 + f_2 = \tilde f_1 + \tilde f_2$, then $f_1 - \tilde f_1 = \tilde f_2 - f_2$. The function on the left-hand side belongs to the RKHS for $k_1$ and the function on the right-hand side belongs to the RKHS for $k_2$, and therefore they are both equal to zero. Thus for every pair $f_1$ and $f_2$ we have $\|f_1 + f_2\|^2 = \|f_1\|_1^2 + \|f_2\|_2^2$.
Take $f_2 = 0$. Then this equality implies that $\|f_1\| = \|f_1\|_1$. Taking $f_1 = 0$ leads to $\|f_2\| = \|f_2\|_2$. The norms on the RKHSs for $k_1$ and $k_2$ thus coincide with the norm on the RKHS for $k$; the same should apply to the scalar products, and thus the two RKHSs are subspaces rather than just subsets of the RKHS for $k$.
Picking arbitrary $f_1$ and $f_2$ and expanding the squared norm yields $\|f_1 + f_2\|^2 = \|f_1\|^2 + 2\langle f_1, f_2 \rangle + \|f_2\|^2$. Comparing this with the above equality implies that $\langle f_1, f_2 \rangle = 0$, i.e., the RKHSs for $k_1$ and $k_2$ are orthogonal subspaces.
Let us introduce a relation $\ll$ on kernels on a set $X$. We will write $k_1 \ll k_2$ if the difference $k_2 - k_1$ is a kernel.
If this relation holds, then $k_1(x, x) \le k_2(x, x)$ for all $x \in X$. Indeed, since $k = k_2 - k_1$ is a kernel, the $1 \times 1$ matrix $(k(x, x))$ is positive semi-definite, i.e., $k(x, x) \ge 0$. This observation justifies the notation to some extent and implies that $\ll$ is antisymmetric. Indeed, if $k_1 \ll k_2$ and $k_2 \ll k_1$, then for $k = k_2 - k_1$ we get $k(x, x) = 0$ for all $x$. Theorem 2 implies that for every $f$ from the RKHS of $k$ and every $x$ we have $|f(x)| \le \|f\|\sqrt{k(x, x)} = 0$, and thus every such $f$ is identically zero; in particular the functions $k(x, \cdot)$ are, i.e., $k_1 = k_2$.
Clearly, $\ll$ is transitive: if $k_1 \ll k_2$ and $k_2 \ll k_3$, then $k_1 \ll k_3$, because the sum of the two kernels $k_2 - k_1$ and $k_3 - k_2$ is a kernel.
The theorems above imply that the relation $\ll$ for kernels is closely linked with the inclusion relation on their RKHSs. However no direct correspondence has been shown. The following results close the gap.
This theorem follows from our previous results. Indeed, the squared norm of $f$ in the RKHS for $k_2$ is given by the minimum of $\|g\|^2 + \|h\|^2$ taken over all decompositions $f = g + h$, where $g$ is from the RKHS for $k_1$ and $h$ is from the RKHS corresponding to the difference $k_2 - k_1$. Every $f$ from the RKHS for $k_1$ can be represented as $f = f + 0$, which implies the inequality in the theorem.
The opposite result holds.
It is easy to see that the inequality on the norms cannot be omitted. Consider some kernel $k$ on $X$ and let $k' = k/2$. The RKHS for $k'$ coincides with the RKHS for $k$ as a set, and for every $f$ from it we have $\|f\|_{k'}^2 = 2\|f\|_k^2$; nevertheless $k \ll k'$ does not hold, since $k' - k = -k/2$ is not a kernel (unless $k$ vanishes identically).
The proof of the theorem is beyond the scope of this article and can be found in [Aronszajn, 1950], pp. 355-356.
The representer theorem provides a motivation for the following analysis, which is important for learning.
Suppose that we have a kernel $k$ on a set $X$ and let $H$ be its RKHS. Consider a subset $X_1 \subseteq X$ and let $k_1$ be the restriction of $k$ to $X_1 \times X_1$. It is easy to see that $k_1$ is still a kernel. What can one say about the RKHS $H_1$ corresponding to $k_1$?
This question is important for learning for the following reason. Suppose that we are given a training sample (or a history of past signals and outcomes, if we are using an on-line protocol) and a new signal $x$. One can consider different sets $X$ containing the observed signals. What effect will the choice of a particular $X$ have on the consistency analysis of a learning method? The representer theorem above suggests that the choice of $X$ is essentially irrelevant: padding the domain does not really change much. (Of course the situation becomes different if we need to make a statement valid for all signals that can possibly occur.) We shall see that this is generally true.
Let us try and clarify the situation. First, does $H_1$ contain more functions on $X_1$? Do the restrictions of functions from $H$ form a set richer than $H_1$?
This is easy to answer if we use the definition of an RKHS provided by Theorem 7. Let $\Phi: X \to F$ be a mapping such that $k(x_1, x_2) = \langle \Phi(x_1), \Phi(x_2) \rangle$ for all $x_1, x_2 \in X$ (here $F$ is a Hilbert space as usual). Then $H$ coincides with the set of functions of the form $f(x) = \langle \Phi(x), h \rangle$, where $h$ ranges over $F$, and the norm of $f$ is the minimum of the norms of all elements $h$ corresponding to $f$. Clearly, the restriction of $\Phi$ to $X_1$ represents $k_1$ just as $\Phi$ represents $k$: we have $k_1(x_1, x_2) = \langle \Phi(x_1), \Phi(x_2) \rangle$ for all $x_1, x_2 \in X_1$. Thus $H_1$ consists of the functions of the form $\langle \Phi(x), h \rangle$ restricted to $X_1$.
This implies that the restrictions of all functions from $H$ to $X_1$ belong to $H_1$. On the other hand, for every $f_1 \in H_1$ there is an $h \in F$ such that $f_1(x) = \langle \Phi(x), h \rangle$ on $X_1$, and we can extend $f_1$ to $X$ by considering the scalar products with the same $h$. Therefore each function from $H_1$ is a restriction of some function from $H$. The norm of $f_1$ does not exceed that of each of its extensions; on the other hand, it equals the norm of some $h$ and therefore the norm of some extension.
The mapping from $H$ to $H_1$ provided by the restriction operator does not have to be injective. Indeed, two elements of $F$ can have the same scalar products with all elements $\Phi(x)$, $x \in X_1$, but different scalar products with some elements $\Phi(x)$, $x \in X \setminus X_1$. One can easily identify a subspace of $H$ isomorphic to $H_1$ though.
Proof: Let us denote the restriction operator by $R$. It maps each $f \in H$ into its restriction $Rf = f|_{X_1}$.
We already know that $R$ maps $H$ into $H_1$ and does not increase the norm. It is easy to see that it is linear. We need to show that $R$, as a mapping from the subspace in question into $H_1$, is injective, surjective, and actually preserves the norm.
Consider the set of finite linear combinations of the form $\sum_i \alpha_i k(x_i, \cdot)$, where $x_i \in X_1$. We can consider them as elements of $H$ and as elements of $H_1$. It is easy to see that $R$ "respects" these linear combinations, i.e., $R\sum_i \alpha_i k(x_i, \cdot) = \sum_i \alpha_i k_1(x_i, \cdot)$ (note that on the left the combination is considered as an element of $H$ and on the right the same combination is an element of $H_1$).
Let us show that $R$ restricted to functions that are finite linear combinations of the above form is an isomorphism. Its surjectivity follows from the above. On the other hand, we have $\|\sum_i \alpha_i k(x_i, \cdot)\|_H = \|\sum_i \alpha_i k_1(x_i, \cdot)\|_{H_1}$, since both squared norms equal $\sum_{i,j} \alpha_i \alpha_j k(x_i, x_j)$.
Now consider $R$ on the closure of these combinations in $H$. Let $f$ belong to this closure; it can be approximated by a linear combination to any degree of precision w.r.t. the norm of $H$. However, $R$ does not increase the norm, and by passing to the limit we therefore have $\|Rf\|_{H_1} = \|f\|_H$.
To prove the surjectivity we need to remember that the finite linear combinations are dense in $H_1$. If $f_1 \in H_1$, it is the limit of a sequence of finite linear combinations. Their inverse images (i.e., the same combinations) in $H$ will form a fundamental sequence, because the norm is preserved, and therefore will converge. The image of the limit has to be the original $f_1$.
Corollary 2. The orthogonal complement of w.r.t. consists of all functions such that for all .
Proof: Consider and . We have . If is orthogonal to , it is orthogonal to each and therefore for all . If on , then it is orthogonal to all , where , all their linear combinations and therefore to all elements of .
Suppose that we have two kernels and . Let be the Cartesian product . We can consider the function defined by
We shall prove this theorem later. Let us first derive some corollaries.
Corollary 3. Let and be two kernels defined on the same domain . Then their componentwise product defined by
Proof: Inside the direct product consider the "diagonal" set defined by . The restriction of the tensor product coincides with the componentwise product on . Clearly, it is therefore a kernel.
Theorem 13 implies that the RKHS corresponding to the componentwise product consists of restrictions of functions from the RKHS corresponding to the tensor product to .
We obtain the following linear algebra result as a bonus.
Corollary 4. The componentwise product of two symmetric positive-definite matrices is symmetric and positive-definite.
The corollary can be proved by considering a symmetric positive-definite matrix as a kernel on a finite set.
Let us prove the theorem about the tensor product. We shall prove it by constructing the RKHS for the product . The space is the tensor product of RKHSs with the scalar product corresponding to and with the scalar product corresponding to . We cannot employ the standard construction (see e.g., [Weidmann, 1980], Section 3.4) directly because we need to ensure that the resulting space is still a space of functions rather than a generic Hilbert space.
Consider the set of functions on of the form , where and . As often happens when we try to construct a tensor product of some spaces, this set is not necessarily closed under addition. Therefore let us consider the space of finite sums , where is a positive integer, and . It is clearly a vector space.
Let us define the scalar product of and by
It still remains to show that satisfies the properties of a scalar product. It is easy to check that it is a bilinear form. We need to show that and vanishes only for . Let us take and apply an orthonormalisation procedure (e.g., the Gram-Schmidt method) to in and to in . After renaming the functions, we get a representation , where, for both and , the equality holds for all and holds for all . For the scalar product we have
Let us now find out the role of the product of kernels defined by
For we get
The resulting space with a scalar product can be incomplete, so we need to construct its completion; the situation is somehow similar to that with the existence theorem. We cannot use the standard result about completions of pre-Hilbert spaces, because we need to obtain a space of functions.
In order to construct the completion, we need to recall some facts about orthonormal bases in Hilbert spaces. Let be a (pre-)Hilbert space with a scalar product . An orthonormal system is a system of vectors , where is some index set, not necessarily countable, such that equals 0 for and 1 for . It is easy to check that all finite subsets of an orthonormal system are linearly independent.
If is an orthonormal system, then for each only countably many scalar products are non-zero and the Bessel inequality holds.
An orthonormal system is an orthonormal base, if its span (the set of all finite linear combinations of its elements) is dense in , i.e., . An orthonormal system is an orthonormal base if and only if for every the Parseval equality holds. The Parseval equality implies that . (The convergence of this series can be understood as follows. If , is a sequence of (pairwise different) indexes including all such that , then as .) The scalar product of is given by . This generalises the finite-dimensional facts about the coordinate decomposition of a vector and its length.
Each Hilbert space has an orthonormal base; this follows from Zorn's lemma. This system is countable if and only if is separable.
Let vectors , form an orthonormal system in a Hilbert space. The series converges if and only if the sequence belongs to . Each separable Hilbert space is thus isomorphic to . As a matter of fact, any Hilbert space is isomorphic to , where is the index set of its orthonormal base. The space consists of systems of reals , where only countably many elements are non-zero and . The addition and multiplication by a constant are defined componentwise. The scalar product of and is .
Let us now build the completion. Take an orthonormal base in and an orthonormal base in . The index sets and do not have to be countable 11 Consider the class of functions of the form , where only countably many coefficients are non-zero and . Let us show that this series is absolutely convergent for each . Fix ; the Cauchy(-Schwarz-Bunyakovsky) inequality implies that
Take and fix the first coordinate. The function belongs to as a converging sum of . (Indeed, the sum converges and .) We can calculate the scalar product
Let us show that contains all products of the form , where and . Indeed, if and then and the sum
One can conclude, by continuity, that
In this section we summarise some sufficient conditions for a function to be a kernel and obtain some popular kernels.
Let us recall the facts we know and formulate simple corollaries.
This is Theorem 4 or the existence theorem.
Corollary 5. If is a kernel and , then is a kernel.
Corollary 6. If and are kernels on the same set, then their sum is a kernel on the set.
Corollary 7. The pointwise limit of kernels is a kernel.
Proof: Let be the pointwise limit of and each is a kernel.
Because is symmetric, is symmetric too. For any finite array and we have . As , we get and thus is positive semi-definite.
This was proven as Corollary 3.
Corollary 8. Let be a kernel and is a polynomial with positive coefficients. Then is a kernel.
Note the requirement that the coefficients should be positive. Indeed, is not a kernel unless is identically equal to zero.
Corollary 9. If is a kernel, then is a kernel.
Proof: The exponent can be decomposed into Taylor's series on . The function is a pointwise limit of polynomials on and all these polynomials have positive coefficients.
Corollary 10. If is a real vector-function on , then
Corollary 11. The function is a kernel on , .
This fact is obvious and can be derived by many different ways.
We can now obtain Vapnik's inhomogeneous polynomial kernel.
Corollary 12. For any positive integer the function is a kernel on .
Proof: Indeed, is a kernel and is a polynomial with positive coefficients.
The following corollary introduces radial-basis function (rbf) kernel.
Corollary 13. For any , the function , where is the Euclidean norm, is a kernel on .
Proof: We have
1 As different, for example, to the classification problem, where s belong to a finite set.
2 Indeed, let be positive definite. If is singular, for some , but this implies .
3 The condition is not just sufficient, but necessary for to be singular. Indeed, if is singular, it is not positive definite and for some . This implies , i.e., columns of are linearly dependent.
4 It is important that no particular structure is postulated on ; throughout the most of this article it is just a set.
5 A definition and a discussion were given above.
6 We have .
7 Thus is not an RKHS according to our definition: it is disqualified without a substantive consideration.
8 The book [Schölkopf and Smola 2002] uses this argument in Section 2.2.3, p. 35-36, in a rather misleading way.
9 Proposition 2.14 from [Schölkopf and Smola 2002] states that continuous maps exist with a reference to [Parthasary and Smith, 1972], where the stronger statement is proven, but the weaker statement is claimed.
10 Aronszajn uses the term "direct product". The reason why we use the term "tensor product" will become apparent later
11 Paper [Aronszajn, 1950] considers only the countable case and its proof is formally valid only for separable RKHSs. However minor amendments can make it valid everywhere.