Dot-Product kernel.
The DotProduct kernel is non-stationary and can be obtained from linear
regression by putting :math:`N(0, 1)` priors on the coefficients of
:math:`x_d (d = 1, \ldots, D)` and a prior of :math:`N(0, \sigma_0^2)`
on the bias. The DotProduct kernel is invariant to a rotation of the
coordinates about the origin, but not to translations. It is
parameterized by a parameter sigma_0 :math:`\sigma_0` which controls the
inhomogeneity of the kernel. For :math:`\sigma_0^2 = 0`, the kernel is
called the homogeneous linear kernel, otherwise it is inhomogeneous. The
kernel is given by
.. math:: k(x_i, x_j) = \sigma_0 ^ 2 + x_i \cdot x_j
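For illustration, a minimal doctest-style sketch (using a small toy input
``X``, chosen here purely as an example) that evaluates the formula above
explicitly and compares it with the kernel's own ``__call__``:

>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import DotProduct
>>> X = np.array([[1., 2.], [3., 4.]])
>>> kernel = DotProduct(sigma_0=1.0)
>>> # sigma_0**2 + x_i . x_j, computed explicitly for all pairs
>>> np.allclose(kernel(X), 1.0 ** 2 + X @ X.T)
True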
The DotProduct kernel is commonly combined with exponentiation.
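For instance, raising the kernel to a power with the ``**`` operator wraps
it in an :class:`Exponentiation` kernel, which behaves like a polynomial
kernel of that degree; a sketch with the same toy input ``X``:

>>> import numpy as np
>>> from sklearn.gaussian_process.kernels import DotProduct
>>> X = np.array([[1., 2.], [3., 4.]])
>>> poly_like = DotProduct(sigma_0=1.0) ** 2  # Exponentiation kernel
>>> # equivalent to (sigma_0**2 + x_i . x_j)**2 evaluated pairwise
>>> np.allclose(poly_like(X), (1.0 + X @ X.T) ** 2)
True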
See [1]_, Chapter 4, Section 4.2, for further details regarding the
DotProduct kernel.
Read more in the :ref:`User Guide <gp_kernels>`.
.. versionadded:: 0.18
Parameters
----------
sigma_0 : float >= 0, default=1.0
    Parameter controlling the inhomogeneity of the kernel. If sigma_0=0,
    the kernel is homogeneous.
sigma_0_bounds : pair of floats >= 0 or 'fixed', default=(1e-5, 1e5)
    The lower and upper bound on 'sigma_0'.
    If set to 'fixed', 'sigma_0' cannot be changed during
    hyperparameter tuning.
References
----------
.. [1] `Carl Edward Rasmussen, Christopher K. I. Williams (2006).
   "Gaussian Processes for Machine Learning". The MIT Press.
   <http://www.gaussianprocess.org/gpml/>`_
Examples
--------
>>> from sklearn.datasets import make_friedman2
>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import DotProduct, WhiteKernel
>>> X, y = make_friedman2(n_samples=500, noise=0, random_state=0)
>>> kernel = DotProduct() + WhiteKernel()
>>> gpr = GaussianProcessRegressor(kernel=kernel,
...         random_state=0).fit(X, y)
>>> gpr.score(X, y)
0.3680...
>>> gpr.predict(X[:2,:], return_std=True)
(array([653.0..., 592.1...]), array([316.6..., 316.6...]))
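A further sketch illustrating the ``sigma_0`` parameter described above
(again with a toy input, here called ``X2``): with ``sigma_0=0`` the kernel
reduces to the homogeneous linear kernel, i.e. the plain pairwise dot
product.

>>> import numpy as np
>>> X2 = np.array([[1., 2.], [3., 4.]])
>>> homogeneous = DotProduct(sigma_0=0.0, sigma_0_bounds="fixed")
>>> np.allclose(homogeneous(X2), X2 @ X2.T)
True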