{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "Gaussian process regression \n", "========================\n", "\n", "We have looked at the problem of fitting functions with parameters to given data points. We assumed that the function in question was an adequate model of the underlying data. The function's parameters were optimized to represent the data as well as possible. So far so good.\n", "\n", "However, what if we do not have an idea of an adequate function? So-called **Gaussian processes** provide an alternative view. Instead thinking of a function that is underlying the generation of the data, now we ask what the statistical relation is among the data points. Specifically, we think of the - say - $n$ data point as a sample drawn from an $n$-variate Gaussian distribution (in fact of zero mean but this is not essential) - hence the name Gaussian process. The art now is to come up with an **adequate covariance matrix** describing the covariance of the data. You may consider the choice of the covariance matrix as the analog of choosing a fitting function. However, in the case of Gaussian processes the link to the data is less immediate, and as such less constraining.\n", "\n", "Besides regression problems as discussed here, Gaussian processes can also be used for classification (which are not discussed here). All in all Gaussian processes are very versatile so that it is worthwhile to have a closer look. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "library(mvtnorm) # multivariate Gaussian\n", "library(repr)\n", "options(repr.plot.width=8, repr.plot.height=5.5) # set plot size" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Towards the end of this lecture you are asked to work on the following question: **what do you predict as global atmospheric CO2 concentration in the year 2030**? Obviously, a question of great relevance. To make your prediction, you have available measurements taken monthly on the vulcano Mauna Loa on Hawai'i as shown below. " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "options(repr.plot.width=15, repr.plot.height=5.5) \n", "d <- read.table(\"monthly_in_situ_co2_mlo.csv\", skip=57, sep=\",\")\n", "mask <- d$V5 > 0\n", "year <- d$V4[mask]\n", "co2 <- d$V5[mask]\n", "\n", "plot(year,co2, main=\"Mauna Loa CO2 measurements\", xlab=\"year\", ylab=\"CO2 concentration [ppm]\", type='l')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Of course, you are supposed to do this with Gaussian process regression. But first things first, starting with a simple example explaining the concept." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$\\newcommand{\\veci}[1]{\\mathbf{#1}}$\n", "$\\newcommand{\\vecii}[2]{\\left[\\begin{array}{l}\\mathbf{#1}\\\\\\mathbf{#2}\\end{array}\\right]}$\n", "$\\newcommand{\\matii}[4]{\\left[\\begin{array}{l}#1\\:\\:\\:#2\\\\#3\\:\\:\\:#4\\end{array}\\right]}$\n", "\n", "In the simplest form, a Gaussian process assumes that the data $\\veci{x}$ are distributed according to a normal distribution of zero mean and covariance matrix $D$. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "$\\newcommand{\\veci}[1]{\\mathbf{#1}}$\n", "$\\newcommand{\\vecii}[2]{\\left[\\begin{array}{l}\\mathbf{#1}\\\\\\mathbf{#2}\\end{array}\\right]}$\n", "$\\newcommand{\\matii}[4]{\\left[\\begin{array}{l}#1\\:\\:\\:#2\\\\#3\\:\\:\\:#4\\end{array}\\right]}$\n", "\n",
"In its simplest form, a Gaussian process assumes that the data $\\veci{x}$ are distributed according to a normal distribution with mean $\\veci{m}$ and covariance matrix $D$. Symbols in **bold face** are vectors, **capital letters** denote matrices.\n", "\n",
"$$\\mathbf{x} \\sim {\\cal{N}}\\left(\\veci{m},\\mathit{D}\\right) \\hspace{2em}\\mathrm{with}\\hspace{2em}\n", "p(\\veci{x})= (2\\pi)^{-n/2} |D|^{-1/2} \\exp\\left(-\\frac{1}{2} \n", "(\\veci{x}-\\veci{m})^\\mathrm{T}D^{-1}(\\veci{x}-\\veci{m})\\right)$$\n", "\n",
"where $n$ is the dimension of $\\veci{x}$. Now we split the data vector $\\veci{x}$ into two parts $\\veci{a}$ and $\\veci{b}$, and the covariance matrix $D$ accordingly:\n", "\n",
"$$ \\vecii{a}{b} \\sim \\cal{N}\\left(\\vecii{m_a}{m_b}, \\matii{A}{C^\\mathrm{T}}{C}{B}\\right)$$\n", "\n",
"If $\\veci{a}$ is an $n_a$-element vector, $A$ is an $n_a\\times n_a$ matrix; similarly, if $\\veci{b}$ is an $n_b$-element vector, $B$ is an $n_b\\times n_b$ matrix. $(.)^\\mathrm{T}$ denotes the transpose. Remember that a covariance matrix has to be symmetric and positive definite.\n", "\n",
"The marginal distribution of $\\veci{b}$, obtained by marginalizing over $\\veci{a}$, is\n", "\n", "$$\\veci{b} \\sim {\\cal{N}}\\left(\\veci{m_b},B\\right)$$\n", "\n",
"(analogously for $\\veci{a}$). And now the most important result, the conditional distribution of $\\veci{b}$ given $\\veci{a}$:\n", "\n",
"$$\\veci{b}|\\veci{a} \\sim {\\cal{N}} \\left(\\veci{m_b} + C A^{-1}(\\veci{a}-\\veci{m_a}), B-C A^{-1} C^\\mathrm{T}\\right)$$\n", "\n",
"You may vaguely remember that at the beginning of the course, when discussing the multivariate Gaussian distribution, I stated that its marginal and conditional distributions can be expressed analytically. Here, finally, are the explicit formulae. The conditional distribution in particular is what we will put to use now. For the tireless ones it might be of interest to hear that $C A^{-1}$ is the \"matrix of regression coefficients\", and $B-C A^{-1} C^\\mathrm{T}$ the \"Schur complement of A in D\". \n", "\n",
"Keeping things simple, and without great loss of generality, we assume $\\veci{m_a}\n", "= \\veci{m_b} = \\veci{0}$ so that \n", "\n", "$$\\veci{b}|\\veci{a} \\sim {\\cal{N}} \\left(C A^{-1}\\veci{a}, B-C A^{-1} C^\\mathrm{T}\\right)$$\n" ] },
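{ "cell_type": "markdown", "metadata": {}, "source": [ "Before putting this to use for regression, a quick numerical illustration of the conditional formula (a small sketch with arbitrarily chosen blocks $A$, $C$ and $B$): we draw many zero-mean samples from the joint distribution, keep only those whose $\\mathbf{a}$-part lies close to a chosen value $\\mathbf{a}_0$, and compare the empirical mean and variance of the $\\mathbf{b}$-part with $C A^{-1}\\mathbf{a}_0$ and $B-C A^{-1} C^\\mathrm{T}$." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# illustrative check of the conditional-distribution formula (block values chosen arbitrarily)\n", "A  <- matrix(c(1.0, 0.5, 0.5, 1.0), nrow=2)  # covariance of a (2 x 2)\n", "Cm <- matrix(c(0.3, 0.6), nrow=1)            # covariance of b with a (1 x 2)\n", "B  <- matrix(1.0, 1, 1)                      # covariance of b (1 x 1)\n", "D  <- rbind(cbind(A, t(Cm)), cbind(Cm, B))   # full covariance matrix\n", "a0 <- c(0.8, -0.4)                           # value of a we condition on\n", "\n", "condmean <- drop(Cm %*% solve(A, a0))        # C A^-1 a0\n", "condvar  <- drop(B - Cm %*% solve(A, t(Cm))) # B - C A^-1 C^T\n", "\n", "s    <- rmvnorm(500000, sigma=D)             # zero-mean samples from N(0, D)\n", "keep <- abs(s[,1]-a0[1]) < 0.1 & abs(s[,2]-a0[2]) < 0.1  # a-part close to a0\n", "c(formula_mean=condmean, sample_mean=mean(s[keep,3]),\n", "  formula_var=condvar, sample_var=var(s[keep,3]))" ] },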
\n", "\n", "Keeping things simple, and without great loss of generality we assume $\\veci{a}\n", "= \\veci{b} = \\veci{0}$ so that \n", "\n", "$$\\veci{b}|\\veci{a} \\sim {\\cal{N}} \\left(C A^{-1}\\veci{a}, B-C A^{-1} C^\\mathrm{T}\\right)$$\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**A simple example using a Gaussian process for interpolation and extrapolation:**" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "x <- c(-1.5, -1.0, -0.75, -0.4, -0.25, 0.0) # some x-values\n", "nx <- length(x)\n", "y <- 0.55*c(-3, -2, -0.6, 0.4, 1.0, 1.6) # some y-values \n", "sigy <- 0.3 # uncertainty of y-values (used later)\n", "xstar1 <- 0.2 # point to extrapolate to\n", "xstar2 <- -0.5 # point to interpolate" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "plot_xy <- function(qmark=FALSE) {\n", " plot(x, y, col=\"red\", pch=16, ylim=c(-2.5,2.5), xlim=c(-1.6,0.3))\n", " arrows(x0=x, y0=y-sigy, x1=x, y1=y+sigy, code=3, angle=90, length=0.03, col=\"red\")\n", " if (qmark) {\n", " points(xstar1, 0.9, pch=16, col=\"blue\")\n", " text(xstar1,0.4, labels='?', col=\"blue\", cex=2)\n", " points(xstar2, 0.0, pch=16, col=\"blue\")\n", " text(xstar2,-0.5, labels='?', col=\"blue\", cex=2)\n", " }\n", " return(invisible(0))\n", "} \n", "plot_xy(qmark=TRUE)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "$\\newcommand{\\veci}[1]{\\mathbf{#1}}$\n", "\n", "Key question now is: **how do we model the covariance among the (red) data points?**\n", "\n", "- we describe a covariance matrix $K$ with a so-called **kernel** function $k$ as\n", "\n", "$$ K = \\left[\\begin{array}{c} \n", "k(x_1,x_1) & k(x_1,x_2) & \\cdots & k(x_1, x_n)\\\\\n", "\\vdots & \\vdots & \\ddots & \\vdots \\\\\n", "k(x_2,x_1) & k(x_2,x_2) & \\cdots & k(x_2, x_n)\\\\\n", "k(x_n,x_1) & k(x_n,x_2) & \\cdots &k(x_n, x_n)\n", "\\end{array}\\right]$$\n", "\n", "- for simplicity (again not necessary but sufficient here) we assume that the covariance among data points does not depend on absolute position $x$ but only on their **distances** $k(x_i,x_j)=k(|x_i-x_j|)$, in this case one speaks of a **stationary kernel**\n", "- the kernel functions need to be positive definite to produce a positive definite covariance matrix\n", " - in analogy to positive definite marices this means that for every square integrable function $f$\n", " $$ \\int_{-\\infty}^{+\\infty}\\int_{-\\infty}^{+\\infty} \n", " k(x,x^\\prime)\\,f(x)\\,f(x^\\prime)\\,dx\\,dx^\\prime > 0$$\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dx <- outer(x,x,'-') # matrix of differences between all x-values\n", " # note convenient use of outer\n", "xfine <- seq(-1.6,0.3,0.01) # xfine later needed for plotting\n", "nxfine <- length(xfine)\n", "dxfine <- outer(xfine,xfine,'-')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here come the first suggestions for kernels. Note their **dependence on parameters**. In the context of Gaussian processes they are called **hyperparameters**. \"hyper\" sounds like magic. However, what is actually meant is that they influence the final functions only indirectly. Perhaps \"metaparameters\" had been a better name for describing their roles.\n", "\n", "`kn` is a kernel function that is later put to use for taking care of the uncertainties in the data." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k1 <- function(sigf,l,dx) sigf^2*exp(-0.5*(dx/l)^2) # squared exponential, very often first and only trial \n", "kn <- function(sigy,dx) sigy^2*diag(1.0,nrow(dx),ncol(dx)) # for noise contribution" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Trying now `k1` with two more or less arbitrily chosen hyperparameters shows that the resulting covariance matrix is positive definite. All eigenvalues are positive:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "eigen(k1(0.5,2,dx))$values" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "One can draw arbitrary random vectors drawn from a multivariate normal distribution for a prescribed covariance function - here k1. Unfortunately, at this point they have nothing to do with our x-y-data (yet) but by playing with the hyperparameters one may get a sense for the flexibility of the resulting functions." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "scrolled": true }, "outputs": [], "source": [ "sigf <- 1.0 # amplitude\n", "l <- 0.3 # scale in x-direction\n", "plot_xy()\n", "for (i in 1:20) lines(xfine,rmvnorm(1, replicate(nxfine,0.0), k1(sigf,l,dxfine)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "In principle, we now could draw a large number of random samples and select only those that pass through the data points. Obviously, that is *very* inefficient. Here the above formula for the conditional probability comes to a rescue. We draw only random samples constraint to pass through the data points. \n", "\n", "Let's try that ..." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sigf <- 2.0\n", "l <- 0.5\n", "\n", "ymean <- mean(y) # OBS: here handling of mean of the data\n", " # Gaussian distribution with zero mean assumed\n", "K <- k1(sigf,l,dx)\n", "Ks <- k1(sigf,l,outer(xfine,x,'-'))\n", "Kss <- k1(sigf,l,outer(xfine,xfine,'-'))\n", "invK <- solve(K)\n", "ys <- drop(Ks %*% invK %*% (y-ymean)) + ymean\n", "covys <- Kss - Ks %*% (invK %*% t(Ks))\n", "\n", "plot_xy()\n", "for (i in 1:20) lines(xfine,rmvnorm(1, ys, covys))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "$\\newcommand{\\veci}[1]{\\mathbf{#1}}$\n", "$\\newcommand{\\vecii}[2]{\\left[\\begin{array}{l}\\mathbf{#1}\\\\\\mathbf{#2}\\end{array}\\right]}$\n", "$\\newcommand{\\matii}[4]{\\left[\\begin{array}{l}#1 & #2\\\\#3 & #4\\end{array}\\right]}$\n", "\n", "In the calculation above I already used some auxiliary quantities that I have not defined yet. This is done here ...\n", "\n", "Let's think of data points $y_i=y(x_i)$ and one point $y_\\ast$ located at position $x_\\ast$ that we want to predict. Besides the matrix $K$ from above we need\n", "\n", "$$K_\\ast \\equiv \\left[k(x_\\ast,x_1)\\: k(x_\\ast,x_2) \\cdots k(x_\\ast,x_n)\\right]$$\n", "\n", "and \n", "\n", "$$K_{\\ast\\ast}\\equiv k(x_\\ast,x_\\ast)$$\n", "\n", "We have written $K_\\ast$ and $K_{\\ast\\ast}$ as matrices (despite they are here a row vector and scalar, respectively) since they become matrices if $y_{\\ast}$ becomes a vector. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "Let's make the representation of the uncertainties more explicit and nicer looking:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sigf <- 2.0\n", "l <- 0.5\n", "\n", "k <- k1 # use squared exponential kernel\n", "\n", "ymean <- mean(y) # OBS: here handling of mean of the data\n", " # subtracted since Gaussian process with zero mean considered\n", "K <- k(sigf,l,dx) # + kn(sigy,dx) # here: interpolation vs. fitting\n", "Ks <- k(sigf,l,outer(xfine,x,'-'))\n", "Kss <- k(sigf,l,outer(xfine,xfine,'-'))\n", "invK <- solve(K)\n", "ys <- drop(Ks %*% invK %*% (y-ymean)) + ymean\n", "covys <- Kss - Ks %*% (invK %*% t(Ks))\n", "\n", "plot_xy()\n", "lines(xfine, ys, type=\"l\", col=\"green\")\n", "points(xstar1, ys[181], col=\"blue\", pch=16)\n", "points(xstar2, ys[111], col=\"blue\", pch=16)\n", "\n", "d <- 2.0*sqrt(abs(diag(covys))) # 2*sigma pointwise uncertainties\n", "polygon(c(xfine,rev(xfine)), c(ys-d,rev(ys+d)), border=NA, \n", " col = rgb(red=0.2,green=0.8,blue=0,alpha=0.3))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "Various choices of the (hyper)parameters sigf, l, ... are possible. Which ones should one pick? \n", "\n", "Use maximum likelihood estimation (MLE): obtain optimal parameters by maximizing the log-likelihood of the data $\\veci{y}$ observed at the locations $\\veci{x}$ with respect to the hyperparameters $\\veci{\\theta}$\n", "\n", "$$\\ln p(\\veci{y}|\\veci{x},\\veci{\\theta}) = -\\frac{1}{2}\\left(\\veci{y}^\\mathrm{T} K^{-1}\\veci{y} + \\ln |K| + n \\ln 2\\pi\\right)$$\n", "\n", "$n$ is the number of data points." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "logl <- function(theta) {\n", " K <- k1(theta[1],theta[2],dx) + kn(sigy,dx) \n", " yq <- y - mean(y) # mean subtracted, as in the prediction above\n", " # \"-\" sign left out since optim() *minimizes* a function\n", " return(0.5* (t(yq) %*% solve(K) %*% yq + log(det(K)) + nx*log(2*pi)))\n", "} \n", "\n", "(o <- optim(c(sigf=2.0,l=0.3), logl, method=\"Nelder-Mead\"))" ] },
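{ "cell_type": "markdown", "metadata": {}, "source": [ "Just to get a feeling for how sensitive the likelihood is to the hyperparameters, here is a quick scan (added as an illustration) of the negative log-likelihood over the length scale `l`, with `sigf` fixed at its optimized value. The minimum should lie close to the value found by `optim()`." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# scan the negative log-likelihood over the length scale l (sigf fixed at its optimum)\n", "lgrid <- seq(0.1, 2.0, by=0.02)\n", "nll <- sapply(lgrid, function(l) logl(c(o$par['sigf'], l)))\n", "plot(lgrid, nll, type='l', xlab='length scale l', ylab='negative log-likelihood')\n", "abline(v=o$par['l'], col='red', lty=2) # optimum found by optim()" ] },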
fitting\n", "Ks <- k(sigf,l,outer(xfine,x,'-'))\n", "Kss <- k(sigf,l,outer(xfine,xfine,'-'))\n", "invK <- solve(K)\n", "ys <- drop(Ks %*% invK %*% (y-ymean)) + ymean\n", "covys <- Kss - Ks %*% (invK %*% t(Ks))\n", "\n", "plot_xy()\n", "lines(xfine, ys, type=\"l\", col=\"green\")\n", "points(xstar1, ys[181], col=\"blue\", pch=16)\n", "points(xstar2, ys[111], col=\"blue\", pch=16)\n", "\n", "# OBS: approach below gives problems since covys not (numerically) positive definite \n", "#for (i in 1:100) lines(xfine, ys + rmvnorm(1, replicate(nxfine,0.0), covys), \n", "# col = rgb(red=1,green=1,blue=0,alpha=0.3))\n", "\n", "d <- 1.96*sqrt(abs(diag(covys))) # 2*sigma pointwise uncertainties\n", "polygon(c(xfine,rev(xfine)), c(ys-d,rev(ys+d)), border=NA, \n", " col = rgb(red=0.2,green=0.8,blue=0,alpha=0.3))" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "GPtrain <- function(kernel,theta,x,y,noise=0) {\n", " dx <- outer(x,x,'-')\n", " nx <- length(x)\n", " yq <- y-mean(y)\n", " logl <- function(theta) { \n", " K <- kernel(theta,noise,dx)\n", " Lup <- chol(K)\n", " invKy <- backsolve(Lup,forwardsolve(t(Lup),yq))\n", " return(0.5* (t(yq) %*% invKy + 2.0*sum(log(diag(Lup))) + nx*log(2*pi)))\n", " }\n", " return(optim(theta,logl,method=\"Nelder-Mead\"))\n", "} \n", "\n", "GPpred <- function(kernel,theta,x,y,xs,noise=0) { \n", " ymean <- mean(y)\n", " K <- kernel(theta, noise, dx=outer(x,x,'-'))\n", " Ks <- kernel(theta, 0, dx=outer(xs,x,'-'))\n", " Kss <- kernel(theta, 0, dx=outer(xs,xs,'-'))\n", " Lup <- chol(K)\n", " invKy <- backsolve(Lup,forwardsolve(t(Lup),(y-ymean)))\n", " ys <- drop(Ks %*% invKy) + ymean\n", " v <- apply(Ks, MARGIN=1, FUN=forwardsolve, l=t(Lup))\n", " covys <- Kss - t(v) %*% v\n", " return(list(ys=ys,covys=covys))\n", "}" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "sqexp_with_noise <- function(theta, noise, dx) {\n", " sigf <- theta[1]\n", " l <- theta[2]\n", " if (noise > 0) return(k1(sigf,l,dx)+kn(noise,dx))\n", " else return(k1(sigf,l,dx))\n", "}\n", "\n", "o <- GPtrain(sqexp_with_noise,c(1.0,0.3),x,y,noise=sigy)\n", "cat(\"Optimal hyperparameters: \", o$par)\n", "p <- GPpred(sqexp_with_noise,o$par,x,y,xfine,noise=sigy)\n", "plot_xy()\n", "lines(xfine,p$ys,type=\"l\", col=\"green\")\n", "d <- 1.96*sqrt(abs(diag(p$covys))) # 2*sigma pointwise uncertainties\n", "polygon(c(xfine,rev(xfine)), c(p$ys-d,rev(p$ys+d)), border=NA, \n", " col = rgb(red=0.2,green=0.8,blue=0,alpha=0.3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**More kernels** are give below ...\n", "\n", "Moreover: sums and products of kernel functions are again kernel functions." 
] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "k2 <- function(sigf,l1,l2,dx) sigf^2*exp(-0.5*(dx/l1)^2-2.0*(sin(pi*dx)/l2)^2) # damped periodic kernel \n", " # with period 1\n", "k4 <- function(sigf,l,dx) sigf^2*exp(-2.0*(sin(pi*dx)/l)^2) # periodic kernel with period 1\n", "k3 <- function(sigf,l,alpha,dx) sigf^2*(1+0.5*(dx/l)^2/alpha)^alpha # rational quadratic kernel\n", "k5 <- function(sigf,l,dx) { # Matérn 3/2\n", " d <- sqrt(3)*abs(dx)/l\n", " return(sigf*(1+d)*exp(-d))\n", "}\n", "k6 <- function(sigf,l,dx) { # Matérn 5/2\n", " d <- sqrt(5)*abs(dx)/l\n", " return(sigf*(1+d+(d^2)/3)*exp(-d))\n", "}" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Let's try and see what the kernels look like, here k2 and k5" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "dx <- seq(-10,10,0.05)\n", "plot(dx, k2(1.0, 2.0, 1.0, dx), type='l')\n", "lines(dx, k5(0.5,1.0,dx), type='l', col='red')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**General remarks on Gaussian process regression**\n", "\n", "- Gaussian process regression computationally demanding (big matrices involved)\n", " - beyond few 1000 data points special techniques necessary\n", "- Fairly trivial to extend to vectorial input ($x$)\n", "- Fairly tricky to extend to vectorial output ($y$)\n", "- If things went by much too fast have a look at the tutorial by Ebden on the UKSta website\n", "- Danie Gerhardus Krige, South African, \"kriging\", mining engineer, optimizing search for mineral deposits (gold!)\n", "- Bertil Matérn, Swede, forestry statistics" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "R", "language": "R", "name": "ir" }, "language_info": { "codemirror_mode": "r", "file_extension": ".r", "mimetype": "text/x-r-source", "name": "R", "pygments_lexer": "r", "version": "3.5.2" } }, "nbformat": 4, "nbformat_minor": 4 }