{ "cells": [ { "cell_type": "markdown", "id": "cae180ff", "metadata": {}, "source": [ " # 第 7 章 高斯过程\n", "\n", "\n", "\n", "在上一章中,我们学习了狄利克雷过程,它是狄利克雷分布的无限维推广,可用于设置未知连续分布的先验。在本章中,我们将学习高斯过程,这是高斯分布的无限维推广,可用于设置未知函数的先验。在贝叶斯统计中,狄利克雷过程和高斯过程都被用来建立灵活的模型,其中允许参数的数量随着数据的大小而增加。\n", "\n", "本章我们将学习:\n", "\n", "- 函数作为概率对象\n", "- 协方差函数与核\n", "- 具有高斯似然的高斯过程\n", "- 具有非高斯似然的高斯过程\n", "---\n", "\n", "## 7.1 线性模型和非线性数据\n", "\n", "在第3章 “线性回归建模” 和 第4章 “广义线性回归模型” 中,介绍了如何构建一般形式的模型:\n", "\n", "\n", "$$\n", "\\theta=\\psi(\\phi(X) \\beta) \\tag{式7.1} \n", "$$\n", "\n", " 式 7.1 中, $\\theta$ 是因变量概率分布的某个参数,如高斯分布的平均值 $\\mu$ 、二项式的参数 $p$、泊松分布的比率 $\\lambda$ 等。$\\psi$ 为逆连接函数,$\\phi$ 是平方根或多项式函数,$\\beta$ 为线性回归的权重参数。对于最简单的一元线性回归情况,$\\psi$ 为恒等函数。\n", "\n", "对上述贝叶斯模型的拟合可以视为通过推断得到权重参数 $\\beta$ 的后验分布,因此,被称为估计的`权重视角(Weight View)`。以多项式回归为例,通过设计非线性函数 $\\phi$ ,可将输入映射到特征空间,然后在特征空间中拟合一个在实际空间中非线性的线性关系。理论上通过使用适当阶次的多项式,总是可以完美拟合任何函数。但除非应用一定形式的正则化(如使用强先验),否则多项式回归很容易导致过拟合,进而模型泛化能力变差。\n", "\n", "`高斯过程` 让数据决定函数的复杂度,在避免(或最小化)过拟合问题同时,为任意函数的建模提供了理论解决方案,为复杂问题建模提供了一种 `函数视角 (Function View)` 。\n", "\n", "以下各节从实用角度解释了高斯过程,几乎避免了所有数学知识。有关更正式的解释,请查看`第 9 章 『下一步去哪里?』`中列出的资源。\n", "\n", "## 7.2 什么是高斯过程?\n", "\n", "### 7.2.1 如何为函数建模?\n", "\n", " 我们将首先描述一种将函数表示为概率对象的方法,以开始对高斯过程的讨论。可以把函数 $f$ 看作是从一组输入 $x$ 到一组输出 $y$ 的映射。因此,可以这样写:\n", "\n", "$$\n", "y=f(x) \\tag{式7.2} \n", "$$\n", "\n", "表示函数的一种方式是为每个值 $x_i$ 列出其相应值 $y_i$ 。事实上,你可能还记得小时候这种函数的表示方式:\n", "\n", "
\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505201816_b3.webp)\n", "\n", "
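\n", "\n", "用 Python 的语言来说,这种“查表式”的表示方法相当于在有限个 $x$ 值上记录对应的 $y$ 值。下面是一个极简的示意(这里以 $y=x^2$ 的几个取值为例,字典 `f_table` 只是演示用的假想名字):\n", "\n", "```python\n", "# 用有限个 (x, y) 对近似地表示一个函数:本质上就是一张查找表\n", "f_table = {0.0: 0.0, 0.5: 0.25, 1.0: 1.0, 1.5: 2.25}\n", "print(f_table[1.0])  # 查询 x=1.0 处的函数值\n", "```\n", "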
\n", "\n", "一般情况下,$x$ 和 $y$ 的值将位于实数轴上;因此,可将函数视为成对 $(x_i,y_i)$ 值的(可能)无限有序列表。这里顺序非常重要,打乱顺序将代表不同函数。\n", "\n", "函数也可以表示为一个由 $x$ 值索引的(可能)无限数组,但重要区别在于,$x$ 值不限于整数,可以取实数。\n", "\n", "采用这种描述方式,可以表示任何想要的函数。但如果想要表示函数的概率呢(例如:混合模型中的多个组分分别对应一个概率分布函数,通过描述函数的概率分布似乎可以对混合模型建模)?我们可以通过让映射具有概率性质来实现。让每个 $y_i$ 的值都是一个随机变量,该随机变量服从给定均值和方差的高斯分布。通过该方式,不再是对单个特定函数的描述,而形成对一系列分布族的描述。\n", "\n", "为了使讨论具体化,让我们使用一些 Python 代码来构建和绘制这类函数的两个示例:" ] }, { "cell_type": "code", "execution_count": null, "id": "5d68c9ea", "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "import scipy.stats as stats\n", "import numpy as np\n", "import pandas as pd\n", "import seaborn as sns\n", "import pymc3 as pm\n", "import arviz as az\n", "# 后文分类示例的绘图代码会用到 logistic(即 sigmoid)函数\n", "from scipy.special import expit as logistic\n", "\n", "az.style.use('arviz-darkgrid')" ] }, { "cell_type": "code", "execution_count": null, "id": "f3d45680", "metadata": {}, "outputs": [], "source": [ "np.random.seed(42)\n", "# 曲线1:10个预测变量,10个高斯采样的结果变量\n", "x = np.linspace(0, 1, 10)\n", "y = np.random.normal(0, 1, len(x))\n", "plt.plot(x, y, 'o-', label='the first one')\n", "\n", "# 曲线2:相同的10个预测变量,每个点以前一个点的采样值为均值进行高斯采样\n", "y = np.zeros_like(x)\n", "for i in range(len(x)):\n", "    y[i] = np.random.normal(y[i-1], 1)\n", "plt.plot(x, y, 
'o-', label='the second one')\n", "plt.legend()" ] }, { "cell_type": "markdown", "id": "ea767118", "metadata": {}, "source": [ "图 7.1\n", "\n", "图 7.1 显示,使用高斯分布的样本对函数进行编码并不复杂,但类似图 7.1 这样生成的函数,表达能力有限且不够灵活。例如:有时我们期望函数具有某种结构或模式,此时第一种函数表达方式就无法编码数据点之间的关系。事实上,第一条曲线是从同一个高斯分布中获得的 10 个独立样本,因此数据点之间是完全独立的。对于第二个函数,实验引入了一些简单的依赖关系,将点 $y_{i+1}$ 处高斯分布的平均值设置为 $y_i$ 的采样值,进而使数据点之间有了某种依赖关系。这为编码具有结构或模式的函数提供了基本思路。在本章后续内容中,将介绍一种更通用的方法来捕获不同数据点之间的依赖关系,而不仅限于相邻点之间。\n", "\n", "此处使用高斯分布的原因主要有两个:一是高斯分布具备形成任何形状函数的灵活性,因为每个数据点处都有各自均值和方差的高斯分布;二是从数学角度看,高斯分布具有一些非常好的特性。\n", "\n", "### 7.2.2 多维高斯与函数\n", "\n", "在图 7.1 中,使用一维高斯的 $n$ 个独立样本表示了第一个函数,然后尝试用 $n$ 个不同高斯分布(每个分布一个样本)表示了第二个函数。其实还可以换一种方式思考:既然每个数据点均被视为来自特定高斯分布的一个采样,是否可以将连续 $n$ 个数据点视为来自于 $n$ 维高斯分布的一次采样呢?更进一步,如果样本数量无限大,是否可以用一个无限维高斯分布来对其建模呢?\n", "\n", "答案是肯定的。在每个数据点均为独立的高斯分布样本假设下,可以使用 $n$ 维高斯分布来对连续 $n$ 个数据点进行建模,而此样本则可视为 $n$ 维高斯分布的一个向量形式(长度为 $n$ )的抽样。如果预测变量 $x$ 代表时间,则该模型就表示了一个基于高斯的随机过程,即高斯过程。\n", "\n", "通过上述分析,大家会发现高斯过程本质上是一个无限维高斯分布(虽然实践中通常是有限维的),其每一维的边缘概率密度都是高斯分布,也代表着每个数据点都是来自某个一维高斯分布的抽样。\n", "\n", "在前面的代码中,通过使用 `np.random.multivariate_normal(np.zeros_like(x), np.eye(len(x)))` 替换 `np.random.normal(0, 1, len(x))` ,即可生成与图 7.1 相同的图形,也证明了上述分析。\n", "\n", "**将模型视为多维高斯分布有一个比较大的优势:可以利用协方差矩阵来编码数据点之间的相关性信息,进而打破观测独立性假设,为数据点间的依赖关系建模**。例如,若协方差矩阵取单位矩阵 `np.eye(len(x))`,则表示图中 10 个数据点处的随机变量方差为 1,而协方差为 0(即观测独立性假设)。如果用其他正数替换单位矩阵中的零,则反映了另外一个完全不同的故事。\n", "\n", "这也意味着,要以概率方式对函数建模,需要打破观测独立性假设,通过选择合适的协方差矩阵,来构造一个能够反映函数特征的多维高斯分布(或高斯过程)。\n", "\n", "### 7.2.3 协方差矩阵与核函数\n", "\n", "根据上文分析,高斯过程与其他无限维高斯分布稍有不同,那就是高斯过程的维度间存在某种依赖关系,我们称之为相关性。从数学上,反映不同变量之间相关性的主要工具是协方差矩阵。设想一下,如果通过某种方式,结合样本数据,量化了高斯过程中相邻维度间的特殊依赖关系,是否就完成了对高斯过程的建模?\n", "\n", "在实际工作中,协方差矩阵是用`核函数`来指定的。也就是说,将高斯过程的建模问题转变为了核函数的建模问题。出于讨论目的,此处简化定义核函数为一个对称函数,它接受两个输入,当两个输入相同时返回 1 ,否则返回一个 $[0,1)$ 间的正值。如果该条件满足,则可将核函数的输出解释为两个输入之间的相似性度量,并用于构造协方差矩阵。\n", "\n", "在众多可用核函数中,指数族二次核是比较常用的一种:\n", "\n", "$$\n", "K\\left(x, x^{\\prime}\\right)=\\exp \\left(-\\frac{\\left\\|x-x^{\\prime}\\right\\|^{2}}{2 \\ell^{2}}\\right) \\tag{式7.3} \n", "$$\n", "\n", "此处, $x,x'$ 是随机变量的值, $\\left\\|x-x^{\\prime}\\right\\|^{2}$ 为平方欧氏距离:\n", "\n", "$$\n", "\\left\\|x-x^{\\prime}\\right\\|^{2}=\\left(x_{1}-x_{1}^{\\prime}\\right)^{2}+\\left(x_{2}-x_{2}^{\\prime}\\right)^{2}+\\cdots+\\left(x_{n}-x_{n}^{\\prime}\\right)^{2} \\tag{式7.4} \n", "$$\n", "\n", "乍一看可能不明显,但指数族二次核具有与高斯分布类似的公式(见表达式 1.3)。因此,有人将此核称为高斯核。 $\\ell$ 为长度尺度(或带宽、方差),用于控制核的宽度。\n", "\n", "为更好地理解核的作用,定义一个 Python 函数来计算指数族二次核:" ] }, { "cell_type": "code", "execution_count": null, "id": "8527f550", "metadata": {}, "outputs": [], "source": [ "def exp_quad_kernel(x, knots, ℓ=1):\n", "    \"\"\"exponentiated quadratic kernel\"\"\"\n", "    return np.array([np.exp(-(x-k)**2 / (2*ℓ**2)) for k in knots])" ] }, { "cell_type": "markdown", "id": "d115bbfc", "metadata": {}, "source": [ "以下代码和图 7.2 展示了一个 $4 \\times 4$ 协方差矩阵。此处选择的输入相当简单,由值 [-1, 0, 1, 2] 组成。" ] }, { "cell_type": "code", "execution_count": null, "id": "7f765f49", "metadata": {}, "outputs": [], "source": [ "data = np.array([-1, 0, 1, 2])\n", "cov = exp_quad_kernel(data, data, 1)\n", "_, ax = plt.subplots(1, 2, figsize=(12, 5))\n", "ax = np.ravel(ax)\n", "ax[0].plot(data, np.zeros_like(data), 'ko')\n", "ax[0].set_yticks([])\n", "for idx, i in enumerate(data):\n", "    ax[0].text(i, 0+0.005, idx)\n", "ax[0].set_xticks(data)\n", "ax[0].set_xticklabels(np.round(data, 2))\n", "#ax[0].set_xticklabels(np.round(data, 2), rotation=70)\n", "ax[1].grid(False)\n", "im = ax[1].imshow(cov)\n", "colors = ['w', 'k']\n", "for i in range(len(cov)):\n", "    for j in range(len(cov)):\n", "        ax[1].text(j, i, round(cov[i, j], 2),\n", "                   color=colors[int(im.norm(cov[i, j]) > 0.5)],\n", "                   ha='center', va='center', fontdict={'size': 16})\n", "ax[1].set_xticks(range(len(data)))\n", "ax[1].set_yticks(range(len(data)))\n", "ax[1].xaxis.tick_top()" ] }, { "cell_type": "markdown", "id": "8ef7f8e2", "metadata": {}, "source": [ "
图 7.2
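\n", "\n", "在继续解读之前,可以用下面几行代码手工验证热图中的几个数值(仅作演示,核宽取 $\\ell=1$):\n", "\n", "```python\n", "# 指数族二次核:距离越远,协方差越小\n", "print(np.exp(-(0 - 1)**2 / 2))   # 相邻两点(距离 1):约 0.61\n", "print(np.exp(-(-1 - 2)**2 / 2))  # 距离最远的两点(距离 3):约 0.01\n", "print(np.exp(-(1 - 1)**2 / 2))   # 任一点与自身:恰好为 1\n", "```\n", "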
\n", "\n", "图 7.2 左侧显示了输入,$x$ 轴上的值表示每个数据点的值,文本注释显示数据点的顺序(从零开始)。右侧有一个热图,表示按照指数族二次核获得的协方差矩阵。颜色越浅协方差越大。热图是对称的,对角线取值较大。协方差矩阵中每个元素的值与两点间距离成反比,而对角线为每个数据点自相关的值,所以均为 1 。\n", "\n", "```{tip}\n", "该核将数据点沿 $x$ 轴的距离转换为预期函数值的协方差值。因此,$x$ 轴上的两个点越近,预计其在 $y$ 轴上的值就越相似。\n", "```\n", "\n", "综上所述,可以使用具有给定协方差的多维高斯分布对函数建模,而使用核函数对协方差建模,如下例中,使用 `exp_quad_kernel` 函数定义了一个多维高斯分布的协方差矩阵,然后使用该分布中的样本来表示函数:" ] }, { "cell_type": "code", "execution_count": null, "id": "fd443a5f", "metadata": {}, "outputs": [], "source": [ "np.random.seed(24)\n", "test_points = np.linspace(0, 10, 200)\n", "fig, ax = plt.subplots(2, 2, figsize=(12, 6), sharex=True,\n", " sharey=True, constrained_layout=True)\n", "ax = np.ravel(ax)\n", "# 四种核宽度\n", "for idx, ℓ in enumerate((0.2, 1, 2, 10)):\n", " # 四种核函数生成的协方差矩阵\n", " cov = exp_quad_kernel(test_points, test_points, ℓ)\n", " \n", " # 基于协方差矩阵从多维高斯分布中抽样\n", " ax[idx].plot(test_points, stats.multivariate_normal.rvs(cov=cov,size=2).T)\n", " ax[idx].set_title(f'ℓ ={ℓ}')\n", "\n", "fig.text(0.51, -0.03, 'x', fontsize=16)\n", "fig.text(-0.03, 0.5, 'f(x)', fontsize=16)" ] }, { "cell_type": "markdown", "id": "20214335", "metadata": {}, "source": [ "
图7.3
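\n", "\n", "不同 $\\ell$ 的效果也可以直接从协方差的角度来理解:对于固定的距离,$\\ell$ 越大,两点间的协方差(相关性)越高,采样得到的函数也就越平滑。下面的小计算仅作示意:\n", "\n", "```python\n", "# 距离固定为 0.5 时,不同核宽 ℓ 对应的协方差值\n", "d = 0.5\n", "for ℓ in (0.2, 1, 2, 10):\n", "    print(ℓ, np.exp(-d**2 / (2 * ℓ**2)).round(3))\n", "```\n", "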
\n", "\n", "如图 7.3 所示,高斯核蕴含了多个函数,通过超参数 $\\ell$ 控制函数的平滑度。$\\ell$ 的值越大,函数越平滑。\n", "\n", "### 7.2.4 高斯过程\n", "\n", "了解对函数建模的原理后,就可以进一步理解什么是高斯过程以及其应用了。从维基百科摘取的对高斯过程的一个比较正式的定义如下:\n", "\n", "```{note}\n", "高斯过程是按时间或空间索引的随机变量集合,其每一个有限集合都服从多维高斯分布,亦即其每一个有限子集的线性组合服从正态分布。\n", "```\n", "\n", "理解高斯过程的诀窍是:**必须认识到高斯过程的概念只不过是一个数学定义,在实践中并不需要处理无限数学对象,而只需要在有数据的地方使用高斯过程**。即将数据集中的所有点视为来自某个多维高斯分布中某一维的抽样,从而将无限维高斯过程压缩成一个有限维高斯分布。从数学上讲,这种压缩是由无限不可观测维度上的边缘化造成的,并且在理论上得到了证明。该方法还保证将始终能够得到一个多维高斯分布。因此,可以将图 7.3 解释为来自高斯过程的真实样本!\n", "\n", "注意:此处我们将多维高斯分布的均值设置成了零(即零均值假设),并通过指数族二次核构建了协方差矩阵,进而完成了对函数的建模。在使用高斯过程时,做出这种零均值假设是较为常见的做法。\n", "\n", "```{note}\n", "高斯过程对于构建贝叶斯非参数模型非常有用,因为可以将它们用作函数的先验分布。\n", "```\n", "\n", "## 7.3 高斯过程回归\n", "\n", "假设可以将 $y$ 建模为一个关于 $x$ 的函数 $f(x)$ 和加性噪声的高斯分布:\n", "\n", "$$\n", "y \\sim \\mathcal{N}(\\mu=f(x), \\sigma=\\epsilon) \\tag{式7.5} \n", "$$\n", "\n", "此处 $\\epsilon \\sim \\mathcal{N}\\left(0, \\sigma_{\\epsilon}\\right)$\n", "\n", "这类似于第 3 章“线性回归模型”中做出的假设。此处的主要区别在于,我们将在 $f$ 上设置先验以体现相邻的函数值之间的某些依赖关系,而高斯过程正适合做这个工作:\n", "\n", "$$\n", "f(x) \\sim \\mathcal{GP}(\\mu_x, K(x, x')) \\tag{式7.6}\n", "$$\n", "\n", "$\\mathcal{GP}$ 表示高斯过程分布, $\\mu_x$ 为均值函数, $K(x,x')$ 为核函数或协方差函数。注意我们使用 `函数` 这个词汇,表示在数学上均值和协方差都是无限的对象,尽管实践中总是作为有限对象来处理的。\n", "\n", "如果先验是高斯过程分布,似然是正态分布,那么根据共轭先验的知识,后验分布也应当是一个高斯过程分布,并且可以解析地计算如下公式:\n", "\n", "$$\n", "p\\left(f\\left(X_{*}\\right) \\mid X_{*}, X, y\\right) \\sim \\mathcal{N}(\\mu, \\Sigma) \\tag{式7.7} \n", "$$\n", "\n", "$$\n", "\\mu=K_{*}^{T} K^{-1} y \\tag{式7.8} \n", "$$\n", "\n", "$$\n", "\\Sigma =K_{* *}-K_{*}^{T} K^{-1} K_{*} \n", "$$\n", "\n", "此处:\n", "\n", "$$\n", "\\begin{align*}\n", "&\\text { - } K=K(X, X) \\\\\n", "&\\text { - } K_{*}=K\\left(X_{*}, X\\right) \\\\\n", "&\\text { - } K_{* *}=K\\left(X_{*}, X_{*}\\right)\n", "\\end{align*}\n", "$$\n", "\n", "$X$ 是观察到的数据点,$X_*$ 表示测试点;也就是我们希望推断得出新点的函数值。\n", "\n", "像往常一样,PyMC3 会自动完成整个高斯过程的推断过程。让我们继续创建一些数据,然后构建一个 PyMC3 模型:" ] }, { "cell_type": "code", "execution_count": null, "id": "aafe9943", "metadata": {}, "outputs": [], "source": [ "np.random.seed(42)\n", "x = np.random.uniform(0, 10, size=15)\n", "y = np.random.normal(np.sin(x), 0.1)\n", "plt.plot(x, y, 'o')\n", "true_x = np.linspace(0, 10, 200)\n", "plt.plot(true_x, np.sin(true_x), 'k--')\n", "plt.xlabel('x')\n", "plt.ylabel('f(x)', rotation=0)" ] }, { "cell_type": "markdown", "id": "d9745313", "metadata": {}, "source": [ "
图7.4
\n", "\n", "在图 7.4 中,我们看到真实的未知函数是一条黑色的虚线,而点则表示了未知函数的含噪声样本。\n", "\n", "请注意,为了将(式 7.7)和(式 7.8)编码到 PyMC3 模型中,需要为指数族二次核的核宽参数 $\\ell$ 以及正态似然的噪声参数 $\\epsilon$ 设置先验。\n", "\n", "高斯过程在 PyMC3 中实现为一系列 Python 类,与以前的模型略有不同;然而,代码仍然非常 `PyMC3 风格`。我在以下代码中添加了一些注释,以指导您完成使用 PyMC3 定义高斯过程的关键步骤:" ] }, { "cell_type": "code", "execution_count": null, "id": "59393caa", "metadata": {}, "outputs": [], "source": [ "# A one dimensional column vector of inputs.\n", "X = x[:, None]\n", "\n", "with pm.Model() as model_reg:\n", "    # hyperprior for lengthscale kernel parameter\n", "    ℓ = pm.Gamma('ℓ', 2, 0.5)\n", "    # instantiate a covariance function\n", "    cov = pm.gp.cov.ExpQuad(1, ls=ℓ)\n", "    # instantiate a GP prior\n", "    gp = pm.gp.Marginal(cov_func=cov)\n", "    # prior\n", "    ϵ = pm.HalfNormal('ϵ', 25)\n", "    # likelihood\n", "    y_pred = gp.marginal_likelihood('y_pred', X=X, y=y, noise=ϵ)\n", "    trace_reg = pm.sample(2000)" ] }, { "cell_type": "markdown", "id": "278da100", "metadata": {}, "source": [ "这里使用的并非(式 7.7 )中预期的正态似然,而是采用了 `gp.marginal_likelihood` 函数。在第一章 “概率思维” 的(式 1.1)和第五章 “模型比较” 的(式 5.13) 中,边缘似然是似然和先验乘积的积分:\n", "\n", "$$\n", "p(y \\mid X, \\theta) = \\int p(y \\mid f, X, \\theta) p(f \\mid X, \\theta) df \\tag{式7.9}\n", "$$\n", "\n", "与往常一样,$\\theta$ 代表所有未知参数,$X$ 是预测变量,$y$ 是结果变量。注意,我们正在对函数 $f$ 的值进行边缘化。对于高斯过程先验和正态似然,可以解析地得到边缘化结果。\n", "\n", "根据 PyMC3 的核心开发者、GP 模块的主要贡献者 Bill Engels 的建议,最好避免为核宽参数 $\\ell$ 使用把概率质量集中在零附近的先验。对 $\\ell$ 来说,一个有用的默认先验是 `pm.Gamma(2, 0.5)`。你可以从 [Stan team](https://github.com/stan-dev/stan/wiki/Prior-ChoiceRecommendations) 阅读更多关于默认先验的建议。" ] }, { "cell_type": "code", "execution_count": null, "id": "d63e5f54", "metadata": {}, "outputs": [], "source": [ "az.plot_trace(trace_reg)" ] }, { "cell_type": "markdown", "id": "6b8e0ae8", "metadata": {}, "source": [ "
图 7.5
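\n", "\n", "除了轨迹图,也可以用 `az.summary` 以表格形式查看两个超参数的后验(以下只是一个示意,假设 `trace_reg` 已按上文采样得到):\n", "\n", "```python\n", "# 查看 ℓ 与 ϵ 的后验均值、标准差与可信区间\n", "az.summary(trace_reg, var_names=['ℓ', 'ϵ'])\n", "```\n", "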
\n", "\n", "现在已经找到 $\\ell$ 和 $\\epsilon$ 的值,我们可能想要从高斯过程分布的后验中获取样本;即已与数据拟合的函数的样本。通过使用 `gp.conditional` 条件函数计算新输入位置的条件分布来实现这一点:" ] }, { "cell_type": "code", "execution_count": null, "id": "18ba9f36", "metadata": {}, "outputs": [], "source": [ "X_new = np.linspace(np.floor(x.min()), np.ceil(x.max()), 100)[:,None]\n", "with model_reg:\n", " f_pred = gp.conditional('f_pred', X_new)" ] }, { "cell_type": "markdown", "id": "4ea8892f", "metadata": {}, "source": [ "结果得到了一个新的 PyMC3 随机变量 f_pred,可以使用它从后验预测分布中获取样本(基于 X_new 值计算):" ] }, { "cell_type": "code", "execution_count": null, "id": "dc304702", "metadata": {}, "outputs": [], "source": [ "with model_reg:\n", " pred_samples = pm.sample_posterior_predictive(trace_reg, vars=[f_pred],samples=82)" ] }, { "cell_type": "markdown", "id": "149ab1a1", "metadata": {}, "source": [ "现在可以在原始数据上绘制拟合函数图,以直观检查其与数据的拟合程度以及预测中的不确定性:" ] }, { "cell_type": "code", "execution_count": null, "id": "c4019ad7", "metadata": {}, "outputs": [], "source": [ "_, ax = plt.subplots(figsize=(12,5))\n", "ax.plot(X_new, pred_samples['f_pred'].T, 'C1-', alpha=0.3)\n", "ax.plot(X, y, 'ko')\n", "ax.set_xlabel('X')" ] }, { "cell_type": "markdown", "id": "fc030ea2", "metadata": {}, "source": [ "
图 7.6
\n", "\n", "或者,可以使用 `pm.gp.util.plot_gp_dist` 函数来获得一些图形。每个图代表一个百分位数,范围从 51(浅色)到 99(深色):" ] }, { "cell_type": "code", "execution_count": null, "id": "a211411c", "metadata": {}, "outputs": [], "source": [ "_, ax = plt.subplots(figsize=(12,5))\n", "pm.gp.util.plot_gp_dist(ax, pred_samples['f_pred'], X_new,\n", "palette='viridis', plot_samples=False);\n", "ax.plot(X, y, 'ko')\n", "ax.set_xlabel('x')\n", "ax.set_ylabel('f(x)', rotation=0, labelpad=15)" ] }, { "cell_type": "markdown", "id": "1b386550", "metadata": {}, "source": [ "
图 7.7
\n", "\n", "另一种选择是计算在参数空间中给定点估计的条件分布的平均向量和标准差。在下面示例中,使用 $\\ell$ 和 $\\epsilon$ 的平均值(在迹的样本上)。我们可以使用 `gp.predict` 函数计算平均值和方差。之所以能做到这一点,是因为 PyMC3 已经解析地计算了后验结果:" ] }, { "cell_type": "code", "execution_count": null, "id": "70dc9dec", "metadata": {}, "outputs": [], "source": [ "_, ax = plt.subplots(figsize=(12,5))\n", "point = {'ℓ': trace_reg['ℓ'].mean(), 'ϵ': trace_reg['ϵ'].mean()}\n", "mu, var = gp.predict(X_new, point=point, diag=True)\n", "sd = var**0.5\n", "ax.plot(X_new, mu, 'C1')\n", "ax.fill_between(X_new.flatten(),\n", " mu - sd, mu + sd,\n", " color=\"C1\",\n", " alpha=0.3)\n", "ax.fill_between(X_new.flatten(),\n", " mu - 2*sd, mu + 2*sd,\n", " color=\"C1\",\n", " alpha=0.3)\n", "ax.plot(X, y, 'ko')\n", "ax.set_xlabel('X')" ] }, { "cell_type": "markdown", "id": "9859d742", "metadata": {}, "source": [ "
图 7.8
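\n", "\n", "上面的 `gp.predict` 给出的是潜在函数 $f$ 的均值与方差;如果还想把观测噪声 $\\epsilon$ 一并计入预测区间,可以尝试 `pred_noise` 参数。下面只是一个示意(沿用上文已定义的 `point` 与 `X_new`):\n", "\n", "```python\n", "# 含观测噪声的解析预测\n", "mu_y, var_y = gp.predict(X_new, point=point, diag=True, pred_noise=True)\n", "sd_y = var_y**0.5  # 含噪声的预测标准差,应不小于仅针对 f 的标准差\n", "```\n", "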
\n", "\n", "正如第四章 “广义线性模型” 中看到的,可以使用具有非高斯似然的线性模型和适当的逆连接函数来扩展线性模型的范围。我们可以为高斯过程做同样的事情。例如,使用具有指数逆连接函数的泊松似然。如果这样做,则后验不再是可解析的,此时需要用 MCMC 或 变分等数值推断方法来逼近后验。\n", "\n", "## 7.4 空间自相关回归\n", "\n", "下面的例子取自 `Richard McElreath` 的 《统计反思( Statistical Rethinking )》 一书。我强烈推荐你读这本书,因为你会发现很多像本例一样的好例子以及很好的解释。唯一需要注意的是,书中示例是 `R/stan` 格式的,但不要担心;在 https://github.com/pymc-devs/Resources 中可以找到其 `Python/PyMC3` 版本。\n", "\n", "现在回到案例,有 10 个不同的岛屿社会;每一个都有其使用的工具数量。一些理论预测,较大人口需要开发和维护更多工具;另一个重要因素是人群间的接触率。\n", "\n", "当将工具数量作为结果变量时,可以使用泊松回归,并将人口数量作为预测变量。事实上,可以使用人口的对数,因为根据理论真正重要的是人口比例,而不是绝对数量。将接触率包括在模型中的一种方法是收集有关这些岛屿社会在历史上的接触频率,可以考虑创建一个定类变量,如低/高接触率(见下表中的 Contact 列)。另一种方式是使用不同岛屿社会间的距离来代表接触率,假设距离近的社会接触会更频繁。\n", "\n", "我们通过读取本书附带的 `island_dist.csv` 文件来访问以千公里为单位表示的距离矩阵:" ] }, { "cell_type": "code", "execution_count": null, "id": "191bdbdd", "metadata": {}, "outputs": [], "source": [ "islands_dist = pd.read_csv('../data/islands_dist.csv',sep=',', index_col=0)\n", "islands_dist.round(1)" ] }, { "cell_type": "markdown", "id": "e5f723bc", "metadata": {}, "source": [ "
\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505204845_7c.webp)\n", "\n", "
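\n", "\n", "可以用一行代码快速验证该距离矩阵确实是对称的、且对角线为零(仅作示意):\n", "\n", "```python\n", "# 距离矩阵应满足 D == D.T,且对角线全为 0\n", "D = islands_dist.values\n", "print(np.allclose(D, D.T), np.allclose(np.diag(D), 0))\n", "```\n", "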
\n", "\n", "主对角线上填满了零,因为每个岛屿社会与其自身的距离为零。矩阵是对称的,即从 $A$ 点到 $B$ 点的距离与从 $B$ 点到 $A$ 点的距离相同。工具数量和人口规模存储在另一个文件 `islands.csv` 中,该文件也随书一起分发:" ] }, { "cell_type": "code", "execution_count": null, "id": "699512a4", "metadata": {}, "outputs": [], "source": [ "islands = pd.read_csv('../data/islands.csv', sep=',')\n", "islands.head().round(1)" ] }, { "cell_type": "markdown", "id": "b29065fe", "metadata": {}, "source": [ "
\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505204920_fd.webp)\n", "\n", "
\n", "\n", "表中只列出和使用了 culture, total_tools, lat, lon2 和 logpop 列:" ] }, { "cell_type": "code", "execution_count": null, "id": "0987ceed", "metadata": {}, "outputs": [], "source": [ "islands_dist_sqr = islands_dist.values**2\n", "culture_labels = islands.culture.values\n", "index = islands.index.values\n", "log_pop = islands.logpop\n", "total_tools = islands.total_tools\n", "x_data = [islands.lat.values[:, None], islands.lon.values[:, None]]" ] }, { "cell_type": "markdown", "id": "424c3999", "metadata": {}, "source": [ "我们要构建的模型是:\n", "\n", "$$\n", "f \\sim \\mathcal{GP}\\left([0, \\cdots, 0], K\\left(x, x^{\\prime}\\right)\\right) \\tag{式7.10} \n", "$$\n", "\n", "$$\n", "\\mu \\sim \\exp (\\alpha+\\beta x+f) \\tag{式7.11} \n", "$$\n", "\n", "$$\n", "y \\sim \\operatorname{Poisson}(\\mu) \\tag{式7.12}\n", "$$\n", "\n", "此处省略了 $\\alpha$ 和 $\\beta$ 的先验,以及核函数的超先验。 $x$ 是对数人口, $y$ 是工具总数。\n", "\n", "与第四章 “广义线性模型” 中的模型相比,本模型基本上是一个泊松回归模型,区别在于线性预测部分多了一个来自高斯过程的 $f$ 项。为了计算高斯过程的核,使用了距离矩阵 `islands_dist`。通过该方式,将距离矩阵有效地转化成了相似性度量。由此,每个岛屿社会的工具数量被建模为其地理相似性与人口规模共同作用的结果,而不是假设工具数量仅由人口规模决定、且各岛屿社会之间相互独立。\n", "\n", "此模型(包括上面省略的先验)对应的 PyMC3 代码如下:" ] }, { "cell_type": "code", "execution_count": null, "id": "8fb3615f", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_islands:\n", "    η = pm.HalfCauchy('η', 1)\n", "    ℓ = pm.HalfCauchy('ℓ', 1)\n", "    cov = η * pm.gp.cov.ExpQuad(1, ls=ℓ)\n", "    gp = pm.gp.Latent(cov_func=cov)\n", "    f = gp.prior('f', X=islands_dist_sqr)\n", "    α = pm.Normal('α', 0, 10)\n", "    β = pm.Normal('β', 0, 1)\n", "    μ = pm.math.exp(α + f[index] + β * log_pop)\n", "    tt_pred = pm.Poisson('tt_pred', μ, observed=total_tools)\n", "\n", "    trace_islands = pm.sample(1000, tune=1000)" ] }, { "cell_type": "markdown", "id": "5280adaf", "metadata": {}, "source": [ "为理解协方差函数的后验分布,可以根据后验分布绘制一些样本:" ] }, { "cell_type": "code", "execution_count": null, "id": "2e78d2df", "metadata": {}, "outputs": [], "source": [ "trace_η = trace_islands['η']\n", "trace_ℓ = trace_islands['ℓ']\n", "_, ax = plt.subplots(1, 1, figsize=(8, 5))\n", "xrange = np.linspace(0, islands_dist.values.max(), 100)\n", "ax.plot(xrange, np.median(trace_η) *\n", "        np.exp(-np.median(trace_ℓ) * xrange**2), lw=3)\n", "ax.plot(xrange, (trace_η[::20][:, None] * np.exp(-trace_ℓ[::20][:, None] *\n", "                                                 xrange**2)).T,\n", "        'C0', alpha=.1)\n", "ax.set_ylim(0, 1)\n", "ax.set_xlabel('distance (thousand kilometers)')\n", "ax.set_ylabel('covariance')" ] }, { "cell_type": "markdown", "id": "466edab3", "metadata": {}, "source": [ "
图 7.9
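\n", "\n", "也可以用后验中位数直接估算协方差降到可以忽略的距离(下面的小片段只是示意,阈值 0.01 是随意取的):\n", "\n", "```python\n", "# 后验中位数下的协方差函数:η * exp(-ℓ * d**2),d 以千公里为单位\n", "eta_m = np.median(trace_islands['η'])\n", "ell_m = np.median(trace_islands['ℓ'])\n", "d = np.linspace(0, islands_dist.values.max(), 1000)\n", "cov_m = eta_m * np.exp(-ell_m * d**2)\n", "print(d[np.argmax(cov_m < 0.01)])  # 协方差首次低于 0.01 的大致距离\n", "```\n", "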
\n", "\n", "图 7.9 中的粗线是成对岛屿社会间协方差作为距离函数的后验中位数。我们使用中位数是因为 $\\ell$ 和 $η$ 的分布非常不对称。可以看到,平均下来协方差没有那么高,在大约 2000 公里处降到了几乎为 0。细线代表不确定性,可以看到有很大的不确定性。\n", "\n", "您可能会发现,将 `model_islands` 及其计算出的后验与[参考资料](https:/​/​github.​com/​pymc-​devs/​resources)中的模型 `m_10_10` 进行比较会很有趣。您可能希望使用 `ArviZ` 函数,例如 `az.summary` 或 `az.plot_forest`。 模型 `m_10_10` 类似于 `model_islands`,但不包含高斯过程项。\n", "\n", "根据我们的模型,现在继续探索岛屿之间的社会联系有多强。为此,必须将协方差矩阵转换为相关矩阵:" ] }, { "cell_type": "code", "execution_count": null, "id": "b40413bb", "metadata": {}, "outputs": [], "source": [ "# compute posterior median covariance among societies\n", "Σ = np.median(trace_η) * (np.exp(-np.median(trace_ℓ) * islands_dist_sqr))\n", "# convert to correlation matrix\n", "Σ_post = np.diag(np.diag(Σ)**-0.5)\n", "ρ = Σ_post @ Σ @ Σ_post\n", "ρ = pd.DataFrame(ρ, index=islands_dist.columns,\n", "columns=islands_dist.columns)\n", "ρ.round(2)" ] }, { "cell_type": "markdown", "id": "36003e00", "metadata": {}, "source": [ "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505205246_9e.webp)\n", "\n", "比较明显的是,夏威夷非常独立,因为夏威夷离其他岛屿社会很远。此外,还可以看到 `Malekula(Ml)`、`Tikopia(Ti)` 和 `Santa Cruz(Sc)` 之间高度相关,因为这些社会非常接近,而且有类似数量的工具。\n", "\n", "现在使用纬度和经度信息来绘制 `岛屿-社会` 的相对位置:" ] }, { "cell_type": "code", "execution_count": null, "id": "4853f811", "metadata": {}, "outputs": [], "source": [ "# scale point size to logpop\n", "logpop = np.copy(log_pop)\n", "logpop /= logpop.max()\n", "psize = np.exp(logpop*5.5)\n", "log_pop_seq = np.linspace(6, 14, 100)\n", "lambda_post = np.exp(trace_islands['α'][:, None] +\n", " trace_islands['β'][:, None] * log_pop_seq)\n", "_, ax = plt.subplots(1, 2, figsize=(12, 6))\n", "ax[0].scatter(islands.lon2, islands.lat, psize, zorder=3)\n", "ax[1].scatter(islands.logpop, islands.total_tools, psize, zorder=3)\n", "for i, itext in enumerate(culture_labels):\n", " ax[0].text(islands.lon2[i]+1, islands.lat[i]+1, itext)\n", " ax[1].text(islands.logpop[i]+.1, islands.total_tools[i]-2.5, itext)\n", "ax[1].plot(log_pop_seq, np.median(lambda_post, axis=0), 'k--')\n", "az.plot_hpd(log_pop_seq, lambda_post, fill_kwargs={'alpha':0},\n", "plot_kwargs={'color':'k', 'ls':'--', 'alpha':1})\n", "for i in range(10):\n", " for j in np.arange(i+1, 10):\n", " ax[0].plot((islands.lon2[i], islands.lon2[j]),\n", " (islands.lat[i], islands.lat[j]), 'C1-',\n", " alpha=ρ.iloc[i, j]**2, lw=4)\n", " ax[1].plot((islands.logpop[i], islands.logpop[j]),\n", " (islands.total_tools[i], islands.total_tools[j]), 'C1-',\n", " alpha=ρ.iloc[i, j]**2, lw=4)\n", "ax[0].set_xlabel('longitude')\n", "ax[0].set_ylabel('latitude')\n", "ax[1].set_xlabel('log-population')\n", "ax[1].set_ylabel('total tools')\n", "ax[1].set_xlim(6.8, 12.8)\n", "ax[1].set_ylim(10, 73)" ] }, { "cell_type": "markdown", "id": "01054d64", "metadata": {}, "source": [ "
图7.10
\n", "\n", "图 7.10 的左侧显示了之前计算的岛屿社会间后验中值相关性的线条,并放在相对地理位置的上下文中。有些线条不可见,因为线条的不透明度是用相关性来设置的。在右侧,再次显示了后验中值相关性,但此次是在对数人口与工具数量的坐标中绘制的。虚线表示工具数量的中位数及其 94% HPD 区间,作为对数人口的函数。两幅图中,圆点大小与每个岛屿社会的人口成正比。\n", "\n", "请注意 `Malekula`、`Tikopia` 和 `Santa Cruz` 之间的相关性如何描述这样一个事实,即它们拥有的工具数量相当少,接近或低于依据其人口所预期的工具数量。类似的事情也发生在 `Trobriands` 和 `Manus` 之间;它们地理位置相近,拥有的工具比依据其人口规模所预期的要少。`Tonga` 拥有的工具比依据其人口所预期的多得多,而且与 `Fiji` 的相关性较高。在某种程度上,该模型告诉我们,`Tonga` 对 `Fiji` 有积极的影响,增加了其工具总数,并抵消了 `Malekula`、`Tikopia` 和 `Santa Cruz` 等近邻对 `Fiji` 的影响。\n", "\n", "## 7.5 高斯过程分类\n", "\n", "高斯过程不限于回归,也可以用于分类。正如第 4 章 “广义线性模型” 中所述,通过将伯努利似然与 Logistic 逆连接函数结合,可以将线性模型转化为适合分类的模型。对于 iris 数据集,本节将尝试重做第 4 章中的 model_0,不过这次使用高斯过程而不是线性模型。" ] }, { "cell_type": "code", "execution_count": null, "id": "bcc31147", "metadata": {}, "outputs": [], "source": [ "iris = pd.read_csv('../data/iris.csv')\n", "iris.head()" ] }, { "cell_type": "markdown", "id": "cc4217bc", "metadata": {}, "source": [ "
\n", "\n", "![image-20210505205534881](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505205541_6c.webp)\n", "\n", "
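\n", "\n", "在挑选类别之前,可以先确认各品种的样本数(标准的 iris 数据集中三个品种各有 50 条记录):\n", "\n", "```python\n", "# 每个品种的样本数\n", "iris['species'].value_counts()\n", "```\n", "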
\n", "\n", "我们从最简单的分类问题开始,类别只有两类:山鸢尾(`setosa`)和变色鸢尾(`versicolor`);预测变量只有一个:萼片长度(`sepal_length`)。使用数字 `0` 和 `1` 对山鸢尾(`setosa`)和变色鸢尾(`versicolor`)进行编码:" ] }, { "cell_type": "code", "execution_count": null, "id": "25a2afc4", "metadata": {}, "outputs": [], "source": [ "df = iris.query(\"species == ('setosa', 'versicolor')\")\n", "y = pd.Categorical(df['species']).codes\n", "x_1 = df['sepal_length'].values\n", "X_1 = x_1[:, None]" ] }, { "cell_type": "markdown", "id": "61fa076b", "metadata": {}, "source": [ "对于此模型,我们不使用 `pm.gp.Marginal` 类实例化高斯过程先验,而是使用 `pm.gp.Latent` 类。前者仅限于高斯似然,但借助共轭性可以解析地完成推断,效率更高;后者更通用,可以与任何似然一起使用:" ] }, { "cell_type": "code", "execution_count": null, "id": "02ac10bd", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_iris:\n", "    ℓ = pm.Gamma('ℓ', 2, 0.5)\n", "    cov = pm.gp.cov.ExpQuad(1, ℓ)\n", "    gp = pm.gp.Latent(cov_func=cov)\n", "    f = gp.prior(\"f\", X=X_1)\n", "    # logistic inverse link function and Bernoulli likelihood\n", "    y_ = pm.Bernoulli(\"y\", p=pm.math.sigmoid(f), observed=y)\n", "\n", "    trace_iris = pm.sample(1000, chains=1, compute_convergence_checks=False)" ] }, { "cell_type": "markdown", "id": "d4cac387", "metadata": {}, "source": [ "现在已经得到了 $\\ell$ 的后验,接下来想从高斯过程后验中获取样本。与前面的边缘高斯过程模型一样,可借助 `gp.conditional` 函数计算一组新输入位置上的条件分布,如以下代码所示:" ] }, { "cell_type": "code", "execution_count": null, "id": "76cbc148", "metadata": {}, "outputs": [], "source": [ "X_new = np.linspace(np.floor(x_1.min()), np.ceil(x_1.max()), 200)[:, None]\n", "with model_iris:\n", "    f_pred = gp.conditional('f_pred', X_new)\n", "    pred_samples = pm.sample_posterior_predictive(\n", "        trace_iris, vars=[f_pred], samples=1000)" ] }, { "cell_type": "markdown", "id": "82bcbc32", "metadata": {}, "source": [ "为显示此模型结果,创建一个类似于图 4.4 的图。我们将使用以下函数直接从 `f_pred` 计算决策边界,而不是解析地获得决策边界:" ] }, { "cell_type": "code", "execution_count": null, "id": "1efb8129", "metadata": {}, "outputs": [], "source": [ "def find_midpoint(array1, array2, value):\n", "    \"\"\"\n", "    This should be a proper docstring :-)\n", "    \"\"\"\n", "    array1 = np.asarray(array1)\n", "    idx0 = np.argsort(np.abs(array1 - value))[0]\n", "    idx1 = idx0 - 1 if array1[idx0] > value else idx0 + 1\n", "    if idx1 == len(array1):\n", "        idx1 -= 1\n", "    return (array2[idx0] + array2[idx1]) / 2" ] }, { "cell_type": "markdown", "id": "c98db5c7", "metadata": {}, "source": [ "以下代码与第 4 章 “广义线性模型” 中用于生成图 4.4 的代码非常相似:" ] }, { "cell_type": "code", "execution_count": null, "id": "0a98742b", "metadata": {}, "outputs": [], "source": [ "_, ax = plt.subplots(figsize=(10, 6))\n", "fp = logistic(pred_samples['f_pred'])\n", "fp_mean = np.mean(fp, 0)\n", "ax.plot(X_new[:, 0], fp_mean)\n", "# plot the data (with some jitter) and the true latent function\n", "ax.scatter(x_1, np.random.normal(y, 0.02),\n", "           marker='.', color=[f'C{x}' for x in y])\n", "az.plot_hpd(X_new[:, 0], fp, color='C2')\n", "db = np.array([find_midpoint(f, X_new[:, 0], 0.5) for f in fp])\n", "db_mean = db.mean()\n", "db_hpd = az.hpd(db)\n", "ax.vlines(db_mean, 0, 1, color='k')\n", "ax.fill_betweenx([0, 1], db_hpd[0], db_hpd[1], color='k', alpha=0.5)\n", "ax.set_xlabel('sepal_length')\n", "ax.set_ylabel('θ', rotation=0)" ] }, { "cell_type": "markdown", "id": "0af0a64c", "metadata": {}, "source": [ "
图 7.11
\n", "\n", "图 7.11 与图 4.4 非常相似, `f_pred` 曲线看起来像是一条 `S` 型曲线,只是两端的尾部在较低的 $x_1$ 值处向上翘起,在较高的 $x_1$ 值处向下弯曲。这是在远离数据的区域,预测函数向先验(零均值,对应概率 0.5)回归的结果。如果只关心决策边界,这应该不是真正的问题;但如果想要为不同萼片长度值建立属于山鸢尾或变色鸢尾的概率模型,那么就应该改进模型,做一些事情来获得更好的尾部表现。实现此目标的方法是给高斯过程增加更多的结构。获得更具表现力的高斯过程模型的一般方法是组合协方差函数,以便更好地捕捉函数的细节。\n", "\n", "以下模型(`model_iris2`)与 `model_iris` 相同,不同之处在于协方差矩阵由三个核组合而成:\n", "\n", "$$\n", "K = K_{\\text{ExpQuad}} + K_{\\text{Linear}} + K_{\\text{WhiteNoise}}(10^{-5})\n", "$$\n", "\n", "通过添加线性核修复了尾部问题,如图 7.12 所示。白噪声核只是一个稳定协方差矩阵计算的技巧。对核函数的要求是,由它生成的协方差矩阵必须是正定的,但数值误差可能会导致违反此条件,其表现是在计算拟合函数的后验预测样本时会出现无效值(NaN)。通过添加一点噪声来稳定计算能够减轻此问题。其实 PyMC3 已经在幕后做到了这一点,但有时需要更多噪声,如以下代码所示:" ] }, { "cell_type": "code", "execution_count": null, "id": "416f2767", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_iris2:\n", "    ℓ = pm.Gamma('ℓ', 2, 0.5)\n", "    c = pm.Normal('c', x_1.min())\n", "    τ = pm.HalfNormal('τ', 5)\n", "    cov = (pm.gp.cov.ExpQuad(1, ℓ) +\n", "           τ * pm.gp.cov.Linear(1, c) +\n", "           pm.gp.cov.WhiteNoise(1E-5))\n", "    gp = pm.gp.Latent(cov_func=cov)\n", "    f = gp.prior(\"f\", X=X_1)\n", "    # logistic inverse link function and Bernoulli likelihood\n", "    y_ = pm.Bernoulli(\"y\", p=pm.math.sigmoid(f), observed=y)\n", "    trace_iris2 = pm.sample(1000, chains=1, compute_convergence_checks=False)" ] }, { "cell_type": "markdown", "id": "45fb2c40", "metadata": {}, "source": [ "现在为先前生成的 `X_new` 值生成 `model_iris2` 的后验预测样本:" ] }, { "cell_type": "code", "execution_count": null, "id": "c53b4865", "metadata": {}, "outputs": [], "source": [ "with model_iris2:\n", "    f_pred = gp.conditional('f_pred', X_new)\n", "    pred_samples = pm.sample_posterior_predictive(trace_iris2,\n", "                                                  vars=[f_pred],\n", "                                                  samples=1000)\n", "_, ax = plt.subplots(figsize=(10, 6))\n", "fp = logistic(pred_samples['f_pred'])\n", "fp_mean = np.mean(fp, 0)\n", "ax.scatter(x_1, np.random.normal(y, 0.02), marker='.',\n", "           color=[f'C{ci}' for ci in y])\n", "db = np.array([find_midpoint(f, X_new[:, 0], 0.5) for f in fp])\n", "db_mean = db.mean()\n", "db_hpd = az.hpd(db)\n", "ax.vlines(db_mean, 0, 1, color='k')\n", "ax.fill_betweenx([0, 1], db_hpd[0], db_hpd[1], color='k', alpha=0.5)\n", "ax.plot(X_new[:, 0], fp_mean, 'C2', lw=3)\n", "az.plot_hpd(X_new[:, 0], fp, color='C2')\n", "ax.set_xlabel('sepal_length')\n", "ax.set_ylabel('θ', rotation=0)" ] }, { "cell_type": "markdown", "id": "5d24a932", "metadata": {}, "source": [ "
图 7.12
\n", "\n", "图 7.12 看起来更类似于图 4.4,而不是图 7.11。此示例有两个主要目的:\n", "\n", "- 展示如何轻松地组合核,以获得更具表现力的模型\n", "- 展示如何使用高斯过程实现 Logistic 回归\n", "\n", "关于第二点,Logistic 回归确实是高斯过程的特例,正如一元线性回归也是高斯过程的特例一样。事实上,许多已知的模型都可以被视为高斯过程的特例,或者至少以某种方式与高斯过程联系在一起。更多内容可以阅读凯文·墨菲(Kevin Murphy)的《Machine Learning: A Probabilistic Perspective》中的第 15 章。\n", "\n", "实践中,对于用普通 Logistic 回归就能解决的问题,再使用高斯过程建模意义不大。相反,我们希望使用高斯过程来建模那些用灵活性较低的模型难以捕获的更复杂数据。例如,假设想要将患病概率建模为年龄的函数。事实证明,非常年轻和非常年长的人比中年人有更高风险。数据集 `space_flu.csv` 是受上述描述启发而构造的合成数据集。" ] }, { "cell_type": "code", "execution_count": null, "id": "660825e9", "metadata": {}, "outputs": [], "source": [ "df_sf = pd.read_csv('../data/space_flu.csv')\n", "age = df_sf.age.values[:, None]\n", "space_flu = df_sf.space_flu\n", "ax = df_sf.plot.scatter('age', 'space_flu', figsize=(8, 5))\n", "ax.set_yticks([0, 1])\n", "ax.set_yticklabels(['healthy', 'sick'])" ] }, { "cell_type": "markdown", "id": "c55ae3f8", "metadata": {}, "source": [ "
\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505210018_ba.webp)\n", "\n", "图7.13\n", "
\n", "\n", "以下模型与 `model_iris` 基本相同:" ] }, { "cell_type": "code", "execution_count": null, "id": "8168e89c", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_space_flu:\n", "    ℓ = pm.HalfCauchy('ℓ', 1)\n", "    cov = pm.gp.cov.ExpQuad(1, ℓ) + pm.gp.cov.WhiteNoise(1E-5)\n", "    gp = pm.gp.Latent(cov_func=cov)\n", "    f = gp.prior('f', X=age)\n", "    y_ = pm.Bernoulli('y', p=pm.math.sigmoid(f), observed=space_flu)\n", "\n", "    trace_space_flu = pm.sample(1000, chains=1, compute_convergence_checks=False)" ] }, { "cell_type": "markdown", "id": "0de4e2ff", "metadata": {}, "source": [ "现在,为 `model_space_flu` 生成后验预测样本,然后绘制结果:" ] }, { "cell_type": "code", "execution_count": null, "id": "efa7215e", "metadata": {}, "outputs": [], "source": [ "X_new = np.linspace(0, 80, 200)[:, None]\n", "with model_space_flu:\n", "    f_pred = gp.conditional('f_pred', X_new)\n", "    pred_samples = pm.sample_posterior_predictive(trace_space_flu,\n", "                                                  vars=[f_pred],\n", "                                                  samples=1000)\n", "_, ax = plt.subplots(figsize=(10, 6))\n", "fp = logistic(pred_samples['f_pred'])\n", "fp_mean = np.nanmean(fp, 0)\n", "ax.scatter(age, np.random.normal(space_flu, 0.02),\n", "           marker='.', color=[f'C{ci}' for ci in space_flu])\n", "ax.plot(X_new[:, 0], fp_mean, 'C2', lw=3)\n", "az.plot_hpd(X_new[:, 0], fp, color='C2')\n", "ax.set_yticks([0, 1])\n", "ax.set_yticklabels(['healthy', 'sick'])\n", "ax.set_xlabel('age')" ] }, { "cell_type": "markdown", "id": "d67f9adc", "metadata": {}, "source": [ "
图 7.14
\n", "\n", "图 7.14 中,高斯过程能够很好地拟合此数据集,即使数据所要求的函数形状比 logistic 函数更复杂。对于一元 Logistic 回归来说,除非引入一些特殊的修改来帮助它,否则不可能很好地拟合该数据。\n", "\n", "## 7.6 考克斯 (Cox) 过程 \n", "\n", "现在考虑计数类数据的建模问题。在此将看到两个示例:一个是时间维度上的计数,另一个是二维空间上的计数。为做到这点,使用泊松分布对似然建模(每个数据点的计数来自于一个泊松分布的抽样),并使用高斯过程对泊松分布的比率(强度)函数建模。因为泊松分布的比率被限制为正值,所以使用指数函数作为逆连接函数,类似第 4 章 “广义线性模型” 中的零膨胀泊松回归。\n", "\n", "在很多资料中,这种可变的比率也以强度(intensity)的形式出现,此类问题被统称为强度估计问题,对应的模型则常被称为考克斯(Cox)模型。考克斯过程是泊松过程的一种,其比率(强度)本身就是一个随机过程。正如高斯过程是随机变量的集合、且这些随机变量的每个有限子集都服从多维高斯分布一样,泊松过程也是随机变量的集合,且这些随机变量的每个有限子集都服从泊松分布。当泊松过程的比率本身是一个随机过程时,就称之为考克斯(Cox)过程。\n", "\n", "### 7.6.1 煤矿灾害\n", "\n", "第一个例子是煤矿灾难。该案例包括英国从 1851 年到 1962 年的煤矿灾难记录。灾难数量被认为受到安全法规变化的影响。我们希望将灾害率建模为时间的函数。数据集只有一列,每个条目都对应一次灾难发生的时间。让我们加载数据并查看它的一些值:" ] }, { "cell_type": "code", "execution_count": null, "id": "e9972658", "metadata": {}, "outputs": [], "source": [ "coal_df = pd.read_csv('../data/coal.csv', header=None)\n", "coal_df.head()" ] }, { "cell_type": "markdown", "id": "61a47152", "metadata": {}, "source": [ "
\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505210251_00.webp)\n", "\n", "
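\n", "\n", "在对数据做离散化之前,可以先快速确认一下记录条数和时间跨度(仅作示意):\n", "\n", "```python\n", "# 灾难记录条数,以及最早、最晚的灾难时间(约 1851 年到 1962 年)\n", "print(len(coal_df), coal_df[0].min(), coal_df[0].max())\n", "```\n", "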
\n", "\n", "我们用来拟合上表 `coal_df` 中数据的模型是:\n", "\n", "$$\n", "\\begin{align*}\n", "f(x) &\\sim \\mathcal{GP}\\left(\\mu_{x}, K\\left(x, x^{\\prime}\\right)\\right) \\tag{式7.13} \\\\\n", "y &\\sim \\operatorname{Poisson}(f(x)) \\tag{式7.14} \n", "\\end{align*}\n", "$$\n", "\n", "公式 $(7.13)$、$(7.14)$ 描述的是一个泊松回归问题。你可能会想,如果只有灾难发生日期这一列,将如何执行回归?答案是将数据离散化,就像正在构建直方图一样。我们使用分箱的中心值作为自变量 $x$ ,每个分箱内的计数作为因变量 $y$:" ] }, { "cell_type": "code", "execution_count": null, "id": "1ffb3ef2", "metadata": {}, "outputs": [], "source": [ "# discretize data\n", "years = int(coal_df.max().values - coal_df.min().values)\n", "bins = years // 4\n", "hist, x_edges = np.histogram(coal_df, bins=bins)\n", "# compute the location of the centers of the discretized data\n", "x_centers = x_edges[:-1] + (x_edges[1] - x_edges[0]) / 2\n", "# arrange xdata into proper shape for GP\n", "x_data = x_centers[:, None]\n", "# express data as the rate number of disaster per year\n", "y_data = hist / 4" ] }, { "cell_type": "markdown", "id": "47fceea1", "metadata": {}, "source": [ "现在用 PyMC3 定义并求解该模型:" ] }, { "cell_type": "code", "execution_count": null, "id": "9f6a7f66", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_coal:\n", "    ℓ = pm.HalfNormal('ℓ', x_data.std())\n", "    cov = pm.gp.cov.ExpQuad(1, ls=ℓ) + pm.gp.cov.WhiteNoise(1E-5)\n", "    gp = pm.gp.Latent(cov_func=cov)\n", "    f = gp.prior('f', X=x_data)\n", "    y_pred = pm.Poisson('y_pred', mu=pm.math.exp(f), observed=y_data)\n", "\n", "    trace_coal = pm.sample(1000, chains=1)" ] }, { "cell_type": "markdown", "id": "cf14d143", "metadata": {}, "source": [ "绘制结果图:" ] }, { "cell_type": "code", "execution_count": null, "id": "2126cdc7", "metadata": {}, "outputs": [], "source": [ "_, ax = plt.subplots(figsize=(10, 6))\n", "f_trace = np.exp(trace_coal['f'])\n", "rate_median = np.median(f_trace, axis=0)\n", "ax.plot(x_centers, rate_median, 'w', lw=3)\n", "az.plot_hpd(x_centers, f_trace)\n", "az.plot_hpd(x_centers, f_trace, credible_interval=0.5,\n", "            plot_kwargs={'alpha': 0})\n", "ax.plot(coal_df, np.zeros_like(coal_df)-0.5, 'k|')\n", "ax.set_xlabel('years')\n", "ax.set_ylabel('rate')" ] }, { "cell_type": "markdown", "id": "cda46645", "metadata": {}, "source": [ "
图 7.15
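\n", "\n", "作为一个粗略的合理性检查(仅作示意),可以把后验中位数比率在整个时间段上近似积分,其结果应与灾难总数处于同一量级:\n", "\n", "```python\n", "# rate_median 是每年的灾难率;乘以总年数后应接近灾难记录总数\n", "print(rate_median.mean() * years, len(coal_df))\n", "```\n", "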
\n", "\n", "图 7.15 用白线显示了灾害发生率的中位数与时间的关系。两条带区分别表示 50% HPD 区间(较深)和 94% HPD 区间(较浅)。在底部,用黑色短竖线标出了每一场灾难(这种图被称为地毯图)。如图所示,除最初的短暂上升外,灾害发生率随时间的推移而下降。PyMC3 的文档中也包括煤矿灾难这个案例,但建模角度有所不同。建议您看一下该示例,因为其本身非常有用,而且将其与刚刚使用 `model_coal` 模型实现的方法进行比较也很有意义。\n", "\n", "请注意,虽然将数据做了分箱,得到的仍是一条平滑的曲线。从该意义上说,可将 `model_coal` 看作是先构建了一个直方图,再对其进行平滑。\n", "\n", "### 7.6.2 红杉数据集\n", "\n", "第二个案例是面向二维空间的红杉木数据。该数据集来自 `GPStuff` 包(一个用于 Matlab、Octave 和 R 的高斯过程包)。该数据集由给定区域上红杉树的位置组成。推断目的是找出该地区树木的比率(密度)是如何分布的。\n", "\n", "像往常一样,我们加载数据并绘制它:" ] }, { "cell_type": "code", "execution_count": null, "id": "10cc3211", "metadata": {}, "outputs": [], "source": [ "rw_df = pd.read_csv('../data/redwood.csv', header=None)\n", "_, ax = plt.subplots(figsize=(8, 8))\n", "ax.plot(rw_df[0], rw_df[1], 'C0.')\n", "ax.set_xlabel('x1 coordinate')\n", "ax.set_ylabel('x2 coordinate')" ] }, { "cell_type": "markdown", "id": "04119b0d", "metadata": {}, "source": [ "
图 7.16
\n", "\n", "同煤矿灾难示例类似,需要对数据进行离散化:" ] }, { "cell_type": "code", "execution_count": null, "id": "3efe9a3e", "metadata": {}, "outputs": [], "source": [ "# discretize spatial data\n", "bins = 20\n", "hist, x1_edges, x2_edges = np.histogram2d(\n", "    rw_df[1].values, rw_df[0].values, bins=bins)\n", "# compute the location of the centers of the discretized data\n", "x1_centers = x1_edges[:-1] + (x1_edges[1] - x1_edges[0]) / 2\n", "x2_centers = x2_edges[:-1] + (x2_edges[1] - x2_edges[0]) / 2\n", "# arrange xdata into proper shape for GP\n", "x_data = [x1_centers[:, None], x2_centers[:, None]]\n", "# arrange ydata into proper shape for GP\n", "y_data = hist.flatten()" ] }, { "cell_type": "markdown", "id": "8a45eada", "metadata": {}, "source": [ "请注意,我们把 x1 和 x2 当作两组独立的输入,而不是一个网格(mesh)。这允许我们为每个坐标分别构建协方差矩阵,从而减小计算高斯过程所需矩阵的大小,然后用 `LatentKron` 类把两者组合成一个高斯过程。需要注意,这不是一个数值技巧,而是这类矩阵结构的数学性质,因此我们不会在模型中引入任何近似或误差,只是用一种更快的方式来表达计算:" ] }, { "cell_type": "code", "execution_count": null, "id": "bdee5b36", "metadata": {}, "outputs": [], "source": [ "with pm.Model() as model_rw:\n", "    ℓ = pm.HalfNormal('ℓ', rw_df.std().values, shape=2)\n", "    cov_func1 = pm.gp.cov.ExpQuad(1, ls=ℓ[0])\n", "    cov_func2 = pm.gp.cov.ExpQuad(1, ls=ℓ[1])\n", "    gp = pm.gp.LatentKron(cov_funcs=[cov_func1, cov_func2])\n", "    f = gp.prior('f', Xs=x_data)\n", "    y = pm.Poisson('y', mu=pm.math.exp(f), observed=y_data)\n", "\n", "    trace_rw = pm.sample(1000)" ] }, { "cell_type": "markdown", "id": "9209b493", "metadata": {}, "source": [ "最后,绘制结果图:" ] }, { "cell_type": "code", "execution_count": null, "id": "9dc37897", "metadata": {}, "outputs": [], "source": [ "rate = np.exp(np.mean(trace_rw['f'], axis=0).reshape((bins, -1)))\n", "fig, ax = plt.subplots(figsize=(6, 6))\n", "ims = ax.imshow(rate, origin='lower')\n", "ax.grid(False)\n", "ticks_loc = np.linspace(0, bins-1, 6)\n", "ticks_lab = np.linspace(0, 1, 6).round(1)\n", "ax.set_xticks(ticks_loc)\n", "ax.set_yticks(ticks_loc)\n", "ax.set_xticklabels(ticks_lab)\n", "ax.set_yticklabels(ticks_lab)\n", "cbar = fig.colorbar(ims, fraction=0.046, pad=0.04)" ] }, { "cell_type": "markdown", "id": "c1e471c3", "metadata": {}, "source": [ "
图 7.17
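\n", "\n", "如果想进一步定位比率最高的区域,可以直接在 `rate` 矩阵上查找最大值所在的网格单元(以下只是一个示意,行、列索引对应上面 reshape 后的矩阵):\n", "\n", "```python\n", "# 找出估计比率最高的网格单元及其数值\n", "idx = np.unravel_index(np.argmax(rate), rate.shape)\n", "print(idx, rate[idx].round(2))\n", "```\n", "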
\n", "\n", "图 7.17 中,较浅的颜色意味着比较深颜色有更高的树木比率。可以想象,我们对寻找高增长率地区很感兴趣,因为可能对森林是如何从火灾中恢复的感兴趣,或者我们对土壤性质感兴趣,也可以把树木当作代理。\n", "\n", "## 7.7 总结\n", "\n", " 高斯过程是多维高斯分布向无限维的推广,并且由均值函数和协方差函数完全指定。由于在概念上可以把函数看作无限长的向量,所以可使用高斯过程作为函数的先验。在实践中,通常处理的不是无限多的对象,而是维数与数据点一样多的多维高斯分布。为定义相应的协方差函数,我们使用了适当的参数化核;通过学习这些超参数,最终来描述任意复杂的函数。\n", "\n", " 本章简要介绍了高斯过程,还有很多与之相关的主题需要进一步学习(比如构建一个半参数化模型,将线性模型作为均值函数),或是将两个或者多个核函数组合在一起来描述未知函数,或将高斯过程用于分类任务,或是如何将高斯过程与统计学或者机器学习中的其他模型联系起来。不管怎么说,希望本章对高斯过程的介绍以及本书中一些其他主题的介绍能够激励你阅读、使用和进一步学习贝叶斯统计。\n", "\n", "\n", "## 7.8 练习\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505210819_20.webp)\n", "\n", "![](https://gitee.com/XiShanSnow/imagebed/raw/master/images/articles/spatialPresent_20210505210830_d3.webp)" ] } ], "metadata": { "jupytext": { "formats": "ipynb,md:myst", "text_representation": { "extension": ".md", "format_name": "myst", "format_version": 0.13, "jupytext_version": "1.11.5" } }, "kernelspec": { "display_name": "Python 3", "language": "python3", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.8.12" }, "source_map": [ 13, 72, 84, 97, 142, 146, 150, 173, 185, 202, 270, 279, 289, 305, 317, 319, 325, 329, 333, 336, 340, 345, 351, 358, 364, 380, 396, 399, 409, 412, 422, 429, 451, 464, 468, 481, 491, 500, 508, 539, 551, 554, 564, 569, 573, 583, 587, 593, 597, 608, 612, 630, 638, 640, 644, 657, 661, 681, 694, 701, 712, 721, 725, 742, 758, 761, 780, 791, 795, 804, 808, 819, 833, 839, 845, 857, 861, 871, 875, 887 ] }, "nbformat": 4, "nbformat_minor": 5 }