
14 revisions 80727a7358 ... 93030a8299

Author SHA1 Message Date
93030a8299 Update chapter16.md to v2 1 year ago
aefc5a22ae Update chapter16.md 1 year ago
7ba6cf4f6c Update chapter15.md to v2 1 year ago
55a275e705 Update chapter14.md to v2 1 year ago
05eac7d4ce Update chapter13.md to v2 1 year ago
c2f5650cba Update chapter12.md to v2 1 year ago
41edd56f3a Update chapter11.md to v2 1 year ago
445872d5f3 Update chapter10.md to v2 1 year ago
1205ce3ed8 Update chapter9.md to v2 1 year ago
439062d609 Update chapter8.md to v2 1 year ago
7c20e2a305 Update chapter7.md to v2 1 year ago
bfe8a85131 Update chapter6.md to v2 1 year ago
bce7cabfe7 Update chapter5.md to v2 1 year ago
87114ffe7a Update chapter4.md to v2 1 year ago

File changes are not displayed because they are too large
+ 0 - 162
docs/chapter10/chapter10.md


+ 0 - 70
docs/chapter11/chapter11.md

@@ -27,10 +27,7 @@ the difference in information entropy of $D^v$. Entropy measures how disordered a system is,
 
 
 $$
-
-
 \operatorname{Ent}(D)=-\sum_{i=1}^{| \mathcal{Y |}} p_{k} \log _{2} p_{k}
-
 $$
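The hunk above restores the information entropy formula $\operatorname{Ent}(D)=-\sum_{k} p_{k} \log _{2} p_{k}$. A minimal sketch of computing it from a list of class labels (the helper name `entropy` is my own, not from the repository):

```python
from collections import Counter
from math import log2

# Information entropy Ent(D) = -sum_k p_k * log2(p_k), with p_k the
# empirical frequency of class k in the label list.
def entropy(labels):
    m = len(labels)
    return -sum((c / m) * log2(c / m) for c in Counter(labels).values())

print(entropy([0, 0, 1, 1]))    # two equally likely classes -> 1.0
print(entropy(['a', 'a', 'a'])) # a pure set has zero entropy
```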
 
 
@@ -136,10 +133,7 @@ $\nabla f(\boldsymbol{w})=-\sum_{i=1}^m 2\left(y^i-\boldsymbol{w}^{\top} \boldsy
 
 
 $$
-
-
 \frac{\left\|\nabla f\left(\boldsymbol{x}^{\prime}\right)-\nabla f(\boldsymbol{x})\right\|_2}{\left\|\boldsymbol{x}^{\prime}-\boldsymbol{x}\right\|_2} \leqslant L, \quad\left(\forall \boldsymbol{x}, \boldsymbol{x}^{\prime}\right)
-
 $$
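The $L$-Lipschitz gradient condition above can be checked numerically. A sketch on a fixed quadratic, where the matrix `A` and the hand-computed eigenvalue bound are illustrative assumptions of mine:

```python
import random

# For f(x) = 0.5 * x^T A x with symmetric A, grad f(x) = A x, so the ratio
# ||grad f(x') - grad f(x)||_2 / ||x' - x||_2 is bounded by the largest
# eigenvalue of A; here trace = 5, det = 5 give (5 + sqrt(5)) / 2 by hand.
A = [[3.0, 1.0], [1.0, 2.0]]
L = (5 + 5 ** 0.5) / 2

def grad(x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def norm(v):
    return sum(t * t for t in v) ** 0.5

random.seed(0)
for _ in range(1000):
    x  = [random.uniform(-5, 5), random.uniform(-5, 5)]
    xp = [random.uniform(-5, 5), random.uniform(-5, 5)]
    gx, gxp = grad(x), grad(xp)
    num = norm([gxp[0] - gx[0], gxp[1] - gx[1]])
    den = norm([xp[0] - x[0], xp[1] - x[1]])
    if den > 1e-9:
        assert num / den <= L + 1e-9  # the Lipschitz bound holds
print("all sampled ratios bounded by L =", L)
```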
 
 
@@ -148,10 +142,7 @@ $$
 
 
 $$
-
-
 \lim _{\boldsymbol{x}^{\prime} \rightarrow \boldsymbol{x}} \frac{\left\|\nabla f\left(\boldsymbol{x}^{\prime}\right)-\nabla f(\boldsymbol{x})\right\|_2}{\left\|\boldsymbol{x}^{\prime}-\boldsymbol{x}\right\|_2}
-
 $$
 
 
@@ -172,10 +163,7 @@ the connection to and difference from LASSO regression; the $x$ in this equation corresponds to the $w$ in Eq. 11.7, i.e. we
 
 
 $$
-
-
 \left\vert\nabla f\left(\boldsymbol{x}^{\prime}\right)-\nabla f(\boldsymbol{x})\right\vert \leqslant L\left\vert\boldsymbol{x}^{\prime}-\boldsymbol{x}\right\vert \quad\left(\forall \boldsymbol{x}, \boldsymbol{x}^{\prime}\right)
-
 $$
 
 
@@ -184,10 +172,7 @@ $$
 
 
 $$
-
-
 \frac{\left|\nabla f\left(\boldsymbol{x}^{\prime}\right)-\nabla f(\boldsymbol{x})\right|}{\vert x^\prime - x\vert}\leqslant L \quad\left(\forall \boldsymbol{x}, \boldsymbol{x}^{\prime}\right)
-
 $$
 
 
@@ -196,10 +181,7 @@ $$
 
 
 $$
-
-
 \nabla^2f(x)\leqslant L
-
 $$
 
 
@@ -208,8 +190,6 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \hat{f}(\boldsymbol{x}) & \simeq f\left(\boldsymbol{x}_{k}\right)+\left\langle\nabla f\left(\boldsymbol{x}_{k}\right), \boldsymbol{x}-\boldsymbol{x}_{k}\right\rangle+\frac{\nabla^2f(x_k)}{2}\left\|\boldsymbol{x}-\boldsymbol{x}_{k}\right\|^{2} \\
 &\leqslant
@@ -220,7 +200,6 @@ $$
 &=f(x_k)+\frac{L}{2}\left(\left(\boldsymbol{x}-\boldsymbol{x}_{k}\right)+\frac{1}{L} \nabla f\left(\boldsymbol{x}_{k}\right)\right)^{\top}\left(\left(\boldsymbol{x}-\boldsymbol{x}_{k}\right)+\frac{1}{L} \nabla f\left(\boldsymbol{x}_{k}\right)\right)-\frac{1}{2L}\nabla f(x_k)^\top\nabla f(x_k)\\
 &=\frac{L}{2}\left\|\boldsymbol{x}-\left(\boldsymbol{x}_{k}-\frac{1}{L} \nabla f\left(\boldsymbol{x}_{k}\right)\right)\right\|_{2}^{2}+\mathrm{const}
 \end{aligned}
-
 $$
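Completing the square as in the hunk above shows that minimizing the majorizer $\hat{f}$ yields the gradient step $\boldsymbol{x}_{k+1}=\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)$. A toy sketch, with the quadratic `f` my own example:

```python
# f(x) = (x - 3)^2 has f''(x) = 2 everywhere, so L = 2 is a valid Lipschitz
# constant for f'. Minimizing the quadratic majorizer gives the step
# x_{k+1} = x_k - (1/L) * f'(x_k), and f never increases along the way.
def f(x):
    return (x - 3.0) ** 2

def grad_f(x):
    return 2.0 * (x - 3.0)

L = 2.0
x = 0.0
for _ in range(20):
    x_next = x - grad_f(x) / L
    assert f(x_next) <= f(x) + 1e-12  # descent guaranteed by the majorizer
    x = x_next
print(x)  # reaches the minimizer 3.0
```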
 
 
@@ -238,10 +217,7 @@ $$
 
 
 $$
-
-
 \hat{f}\left(\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)\right) \leqslant \hat{f}\left(\boldsymbol{x}_k\right)
-
 $$
 
 
@@ -253,10 +229,7 @@ $f(\boldsymbol{x}) \leqslant \hat{f}(\boldsymbol{x})$ always holds, therefore,
 
 
 $$
-
-
 f\left(\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)\right) \leqslant \hat{f}\left(\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)\right)
-
 $$
 
 
@@ -268,10 +241,7 @@ $f\left(\boldsymbol{x}_k\right)=\hat{f}\left(\boldsymbol{x}_k\right)$,
 
 
 $$
-
-
 f\left(\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)\right) \leqslant \hat{f}\left(\boldsymbol{x}_k-\frac{1}{L} \nabla f\left(\boldsymbol{x}_k\right)\right) \leqslant \hat{f}\left(\boldsymbol{x}_k\right)=f\left(\boldsymbol{x}_k\right)
-
 $$
 
 
@@ -300,14 +270,11 @@ $\hat{g}(\boldsymbol{x})=\hat{f}(\boldsymbol{x})+\lambda\|\boldsymbol{x}\|_{1^{\
 Let the optimization objective be 
 
 $$
-
-
 \begin{aligned}
 g(\boldsymbol{x}) &=\frac{L}{2}\|\boldsymbol{x}-\boldsymbol{z}\|_{2}^{2}+\lambda\|\boldsymbol{x}\|_{1} \\
 &=\frac{L}{2} \sum_{i=1}^{d}\left\|x^{i}-z^{i}\right\|_{2}^{2}+\lambda \sum_{i=1}^{d}\left\|x^{i}\right\|_{1} \\
 &=\sum_{i=1}^{d}\left(\frac{L}{2}\left(x^{i}-z^{i}\right)^{2}+\lambda\left|x^{i}\right|\right)
 \end{aligned}
-
 $$
 
 
@@ -317,10 +284,7 @@ $$
 
 
 $$
-
-
 g\left(x^{i}\right)=\frac{L}{2}\left(x^{i}-z^{i}\right)^{2}+\lambda\left|x^{i}\right|
-
 $$
 
 
@@ -330,10 +294,7 @@ $$
 
 
 $$
-
-
 \frac{d g\left(x^{i}\right)}{d x^{i}}=L\left(x^{i}-z^{i}\right)+\lambda \operatorname{sign}\left(x^{i}\right)
-
 $$
 
 
@@ -342,13 +303,10 @@ $$
 where 
 
 $$
-
-
 \operatorname{sign}\left(x^{i}\right)=\left\{\begin{array}{ll}
 {1,} & {x^{i}>0} \\
 {-1,} & {x^{i}<0}
 \end{array}\right.
-
 $$
 
 
@@ -358,10 +316,7 @@ $$
 
 
 $$
-
-
 x^{i}=z^{i}-\frac{\lambda}{L} \operatorname{sign}\left(x^{i}\right)
-
 $$
 
 
@@ -378,10 +333,7 @@ $$
     
 
 $$
-
-
 \frac{d^2 g\left(x^{i}\right)}{{d x^{i}}^2}=L
-
 $$
 
 
@@ -394,20 +346,17 @@ $$
 (4) Finally, consider the case $x^i=0$, where $g(x^i)=\frac{L}{2}\left({z^i}\right)^2$. When $\vert z^i\vert>\frac{\lambda}{L}$, the derivation above shows that the minimum of $g(x^i)$ is attained at $x^i=z^i-\frac{\lambda}{L}$, because
         
 $$
-
 \begin{aligned}
            g(x^i)\vert_{x^i=0}-g(x^i)\vert_{x^i=z^i-\frac{\lambda}{L}}
            &=\frac{L}{2}\left({z^i}\right)^2 - \left(\lambda z^i-\frac{\lambda^2}{2L}\right)\\
            &=\frac{L}{2}\left(z^i-\frac{\lambda}{L}\right)^2\\
            &>0
            \end{aligned}
-
 $$
 
 Therefore, when $\vert z^i\vert>\frac{\lambda}{L}$, $x^i=0$ is not the minimizer of $g(x^i)$. When $-\frac{\lambda}{L} \leqslant z^i \leqslant \frac{\lambda}{L}$, for any $\Delta x\neq 0$ we have
 
 $$
-
 \begin{aligned}
            g(\Delta x) &=\frac{L}{2}\left(\Delta x-z^{i}\right)^{2}+\lambda|\Delta x| \\
            &=\frac{L}{2}\left((\Delta x)^{2}-2 \Delta x \cdot z^{i}+\frac{2 \lambda}{L}|\Delta x|\right)+\frac{L}{2}\left(z^{i}\right)^{2} \\
@@ -415,7 +364,6 @@ $$
            &\ge\frac{L}{2}\left(\Delta x\right)^2+\frac{L}{2}\left(z^{i}\right)^{2}\\
            &>g(x^i)\vert_{x^i=0}
            \end{aligned}
-
 $$
 
 Therefore $x^i=0$ is the minimizer of $g(x^i)$.
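The case analysis above combines into the familiar soft-thresholding operator, the closed-form proximal step for the $L_1$ term. A minimal sketch (the function name is my own):

```python
# The cases derived above:
#   x_i = z_i - lam/L  if z_i >  lam/L
#   x_i = z_i + lam/L  if z_i < -lam/L
#   x_i = 0            if |z_i| <= lam/L
def soft_threshold(z, lam, L):
    t = lam / L
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

print(soft_threshold(2.0, 1.0, 1.0))   # 1.0
print(soft_threshold(-0.5, 1.0, 1.0))  # 0.0
```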
@@ -446,8 +394,6 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \boldsymbol B\boldsymbol A
 & =\begin{bmatrix}
@@ -475,7 +421,6 @@ b_{1}^{2} &b_{2}^{2}  & \cdot  & \cdot  & \cdot  & b_{k}^{2}\\
 \sum_{j=1}^{k}b_{j}^{d}\alpha _{1}^{j}& \sum_{j=1}^{k}b_{j}^{d}\alpha _{2}^{j}  & \cdot  & \cdot  &\cdot   &  \sum_{j=1}^{k}b_{j}^{d}\alpha _{m}^{j}
 \end{bmatrix}_{d\times m} &
 \end{aligned}
-
 $$
 
 
@@ -484,8 +429,6 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \boldsymbol b_{\boldsymbol j}\boldsymbol \alpha ^{\boldsymbol j}
 & =\begin{bmatrix}
@@ -507,7 +450,6 @@ b_{j}^{2}\alpha _{1}^{j} &b_{j}^{2}\alpha _{2}^{j}  & \cdot  & \cdot  & \cdot  &
 b_{j}^{d}\alpha _{1}^{j}& b_{j}^{d}\alpha _{2}^{j}  & \cdot  & \cdot  &\cdot   &  b_{j}^{d}\alpha _{m}^{j}
 \end{bmatrix}_{d\times m} &
 \end{aligned}
-
 $$
 
 
@@ -516,8 +458,6 @@ $$
 Summing over $j$ gives: 
 
 $$
-
-
 \begin{aligned}
 \sum_{j=1}^{k}\boldsymbol b_{\boldsymbol j}\boldsymbol \alpha ^{\boldsymbol j} 
 & = \sum_{j=1}^{k}\left (\begin{bmatrix}
@@ -539,7 +479,6 @@ b_{j}^{1}\\ b_{j}^{2}
 \sum_{j=1}^{k}b_{j}^{d}\alpha _{1}^{j}& \sum_{j=1}^{k}b_{j}^{d}\alpha _{2}^{j}  & \cdot  & \cdot  &\cdot   &  \sum_{j=1}^{k}b_{j}^{d}\alpha _{m}^{j}
 \end{bmatrix}_{d\times m} &
 \end{aligned}
-
 $$
 
 
@@ -589,14 +528,11 @@ Codebook Update Stage, in which the dictionary matrix is updated in K separate steps $\mathb
 As shown in Eq. (21) of the original paper: 
 
 $$
-
-
 \begin{aligned}
 \|\mathbf{Y}-\mathbf{D X}\|_F^2 & =\left\|\mathbf{Y}-\sum_{j=1}^K \mathbf{d}_j \mathbf{x}_T^j\right\|_F^2 \\
 & =\left\|\left(\mathbf{Y}-\sum_{j \neq k} \mathbf{d}_j \mathbf{x}_T^j\right)-\mathrm{d}_k \mathbf{x}_T^k\right\|_F^2 \\
 & =\left\|\mathbf{E}_k-\mathbf{d}_k \mathrm{x}_T^k\right\|_F^2 .
 \end{aligned}
-
 $$
 
 
@@ -625,10 +561,7 @@ $\mathbf{E}_k^R=\mathbf{U} \Delta \mathrm{V}^{\top}$, then
 
 
 $$
-
-
 \tilde{\mathbf{d}}_k=\mathbf{U}_1, \quad \tilde{\mathbf{x}}_T^k=\boldsymbol{\Delta}(1,1) \mathbf{V}_1^{\top}
-
 $$
 
 
@@ -659,10 +592,7 @@ De-Noising, and in Basis Pursuit and Matching Pursuit on page 261, the
 
 
 $$
-
-
 \left(1-\delta_k\right) \leqslant \frac{\left\|\mathbf{A}_k \boldsymbol{s}\right\|_2^2}{\|\boldsymbol{s}\|_2^2} \leqslant\left(1+\delta_k\right)
-
 $$
 
 

+ 0 - 162
docs/chapter12/chapter12.md

@@ -50,10 +50,7 @@ $\left|\frac{1}{m} \sum_{i=1}^m x_i-\frac{1}{m} \sum_{i=1}^m \mathbb{E}\left(x_i
 
 
 $$
-
-
 \frac{1}{m} \sum_{i=1}^m x_i-\frac{1}{m} \sum_{i=1}^m \mathbb{E}\left(x_i\right) \geqslant \epsilon \quad \vee \quad \frac{1}{m} \sum_{i=1}^m x_i-\frac{1}{m} \sum_{i=1}^m \mathbb{E}\left(x_i\right) \leqslant-\epsilon
-
 $$
 
 
@@ -77,10 +74,7 @@ $2 e^{-2 m \epsilon^2}$).
 McDiarmid's inequality: first, an explanation of its precondition:
 
 $$
-
-
 \sup _{x_{1}, \ldots, x_{m}, x_{i}^{\prime}}\left|f\left(x_{1}, \ldots, x_{m}\right)-f\left(x_{1}, \ldots, x_{i-1}, x_{i}^{\prime}, x_{i+1}, \ldots, x_{m}\right)\right| \leqslant c_{i}
-
 $$
 
 
@@ -167,10 +161,7 @@ $P(h(\boldsymbol{x})=y) =1-P(h(\boldsymbol{x}) \neq y)$
 First, what it means for $h$ to be "consistent" with $D$: the beginning of Section 12.2 introduces this notion. If $h$ separates every sample in $D$ in agreement with the true labels, the problem is said to be consistent for the learning algorithm, i.e. $\left(h\left(\boldsymbol{x}_{1}\right)=y_{1}\right) \wedge \ldots \wedge\left(h\left(\boldsymbol{x}_{m}\right)=y_{m}\right)$ holds. Since the events are independent, $P\left(\left(h\left(\boldsymbol{x}_{1}\right)=y_{1}\right) \wedge \ldots \wedge\left(h\left(\boldsymbol{x}_{m}\right)=y_{m}\right)\right)=\prod_{i=1}^{m} P\left(h\left(\boldsymbol{x}_{i}\right)=y_{i}\right)$. By the definition of complementary events, $\prod_{i=1}^{m} P\left(h\left(\boldsymbol{x}_{i}\right)=y_{i}\right)=\prod_{i=1}^{m}\left(1-P\left(h\left(\boldsymbol{x}_{i}\right) \neq y_{i}\right)\right)$, and then by Eq. (12.10),
 
 $$
-
-
 \prod_{i=1}^{m}\left(1-P\left(h\left(\boldsymbol{x}_{i}\right) \neq y_{i}\right)\right)<\prod_{i=1}^{m}(1-\epsilon)=(1-\epsilon)^{m}
-
 $$
 
 
@@ -182,10 +173,7 @@ $$
 
 
 $$
-
-
 \begin{aligned}P\left(h \in \mathcal{H}: E(h)>\epsilon \wedge \widehat{E}(h)=0\right) &=\sum_i^{\mathcal{\vert H\vert}}P\left(E(h_i)>\epsilon \wedge \widehat{E}(h_i)=0\right)\\&<|\mathcal{H}|(1-\epsilon)^{m}\end{aligned}
-
 $$
 
 
@@ -209,15 +197,12 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \vert\mathcal{H}\vert e^{-m \epsilon} &\leqslant \delta\\
 e^{-m \epsilon} &\leqslant \frac{\delta}{\vert\mathcal{H}\vert}\\
 -m \epsilon &\leqslant \ln\delta-\ln\vert\mathcal{H}\vert\\
 m &\geqslant \frac{1}{\epsilon}\left(\ln |\mathcal{H}|+\ln \frac{1}{\delta}\right)
 \end{aligned}
-
 $$
 
 
@@ -250,8 +235,6 @@ $E(h)=\frac{1}{m} \sum_{i=1}^m \mathbb{E}\left(\mathbb{I}\left(h\left(\boldsymbo
 
 
 $$
-
-
 \begin{aligned}
 P(|E(h)-\widehat{E}(h)| \geqslant \epsilon) &\leqslant 2 \exp \left(-2 m \epsilon^{2}\right)\\
 P(|E(h)-\widehat{E}(h)| \geqslant \epsilon) &\leqslant \delta\\
@@ -259,7 +242,6 @@ P(|E(h)-\widehat{E}(h)| \leqslant \epsilon) &\geqslant 1 - \delta\\
 P(-\epsilon \leqslant E(h)-\widehat{E}(h) \leqslant \epsilon) &\geqslant 1 - \delta\\
 P(\widehat{E}(h) -\epsilon \leqslant E(h) \leqslant \widehat{E}(h)+\epsilon) &\geqslant 1 - \delta\\
 \end{aligned}
-
 $$
 
 
@@ -275,13 +257,10 @@ $$
 
 
 $$
-
-
 \begin{aligned} 
 & P(\exists h \in \mathcal{H}:|E(h)-\widehat{E}(h)|>\epsilon) \\
 =& P\left(\left(\left|E_{h_{1}}-\widehat{E}_{h_{1}}\right|>\epsilon\right) \vee \ldots \vee\left(| E_{h_{|\mathcal{H}|}}-\widehat{E}_{h_{|\mathcal{H}|} |>\epsilon}\right)\right) \\ \leqslant & \sum_{h \in \mathcal{H}} P(|E(h)-\widehat{E}(h)|>\epsilon) 
 \end{aligned}
-
 $$
 
 
@@ -292,13 +271,10 @@ $$
 By Eq. 12.17: 
 
 $$
-
-
 \begin{aligned}
 &P(|E(h)-\widehat{E}(h)| \geqslant \epsilon) \leqslant 2 \exp \left(-2 m \epsilon^{2}\right)\\
 &\Rightarrow \sum_{h \in \mathcal{H}} P(|E(h)-\widehat{E}(h)|>\epsilon) \leqslant 2|\mathcal{H}| \exp \left(-2 m \epsilon^{2}\right)
 \end{aligned}
-
 $$
 
 
@@ -307,14 +283,11 @@ $$
 Therefore: 
 
 $$
-
-
 \begin{aligned}
 P(\exists h \in \mathcal{H}:|E(h)-\widehat{E}(h)|>\epsilon) 
 &\leqslant  \sum_{h \in \mathcal{H}} P(|E(h)-\widehat{E}(h)|>\epsilon)\\
 &\leqslant 2|\mathcal{H}| \exp \left(-2 m \epsilon^{2}\right)
 \end{aligned}
-
 $$
 
 
@@ -323,12 +296,9 @@ $$
 For the complementary event: 
 
 $$
-
-
 \begin{aligned}
 P(\forall h\in\mathcal{H}:\vert E(h)-\widehat{E}(h)\vert\leqslant\epsilon)&=1-P(\exists h \in \mathcal{H}:|E(h)-\widehat{E}(h)|>\epsilon)\\ &\geqslant 1- 2|\mathcal{H}| \exp \left(-2 m \epsilon^{2}\right)
 \end{aligned}
-
 $$
 
 
@@ -338,10 +308,7 @@ $$
 
 
 $$
-
-
 P\left(\forall h\in\mathcal{H}:\vert E(h)-\widehat{E}(h)\vert\leqslant\sqrt{\frac{\ln |\mathcal{H}|+\ln (2 / \delta)}{2 m}}\right)\geqslant 1- \delta
-
 $$
 
 
@@ -375,10 +342,7 @@ the inequality
 
 
 $$
-
-
 \mathbf{P}\left(\pi^{(l)}>\varepsilon\right) \leqq 4 m^S(2 l) e^{-\varepsilon^2 l / 8} .
-
 $$
 
 
 the "existence" of one event in S, rather than "any" event in class S.
 
 
 $$
-
-
 P(\exists h \in \mathcal{H}:|E(h)-\widehat{E}(h)|>\epsilon) \leqslant 2|\mathcal{H}| e^{-2 m \epsilon^2}
-
 $$
 
 
@@ -407,10 +368,7 @@ $$
 
 
 $$
-
-
 P(\forall h \in \mathcal{H}:|E(h)-\widehat{E}(h)| \leqslant \epsilon) \leqslant 1-2|\mathcal{H}| e^{-2 m \epsilon^2}
-
 $$
 
 
@@ -455,10 +413,7 @@ $\mathrm{VC}(\mathcal{H})=\max \left\{m: \Pi_{\mathcal{H}}(m)=2^{m}\right\}=0$
 
 
 $$
-
-
 \begin{array}{l}{\mathcal{H}_{| D}=\left\{\left(h\left(\boldsymbol{x}_{1}\right), h\left(\boldsymbol{x}_{2}\right), \ldots, h\left(\boldsymbol{x}_{m}\right)\right) | h \in \mathcal{H}\right\}} \\ {\mathcal{H}_{| D^{\prime}}=\left\{\left(h\left(\boldsymbol{x}_{1}\right), h\left(\boldsymbol{x}_{2}\right), \ldots, h\left(\boldsymbol{x}_{m-1}\right)\right) | h \in \mathcal{H}\right\}}\end{array}
-
 $$
 
 
@@ -468,13 +423,10 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \mathcal{H}_{\vert D}&=\{(+,-,-),(+,+,-),(+,+,+),(-,+,-),(-,-,+)\}\\
 \mathcal{H}_{\vert D^\prime}&=\{(+,+),(+,-),(-,+),(-,-)\}\\
 \end{aligned}
-
 $$
 
 
@@ -485,10 +437,7 @@ $$
 
 
 $$
-
-
 \left|\mathcal{H}_{| D}\right|=\left|\mathcal{H}_{| D^{\prime}}\right|+\left|\mathcal{H}_{D^{\prime} | D}\right|
-
 $$
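The counting identity $\left|\mathcal{H}_{| D}\right|=\left|\mathcal{H}_{| D^{\prime}}\right|+\left|\mathcal{H}_{D^{\prime} | D}\right|$ can be checked directly on the five labelings listed earlier; the set literals below transcribe that example:

```python
# Restrict each labeling in H|_D to the first m-1 points, then count the
# restricted labelings that appear with BOTH labels on the last point.
H_D = {('+', '-', '-'), ('+', '+', '-'), ('+', '+', '+'),
       ('-', '+', '-'), ('-', '-', '+')}
H_Dp = {h[:-1] for h in H_D}  # restrictions to D'
both = {p for p in H_Dp if p + ('+',) in H_D and p + ('-',) in H_D}
print(len(H_D), len(H_Dp), len(both))  # 5 4 1
assert len(H_D) == len(H_Dp) + len(both)
```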
 
 
@@ -498,10 +447,7 @@ $$
 
 
 $$
-
-
 \left|\mathcal{H}_{| D^{\prime}}\right| \leqslant \Pi_{\mathcal{H}}(m-1) \leqslant \sum_{i=0}^{d}\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right)
-
 $$
 
 
@@ -512,10 +458,7 @@ $$
 
 
 $$
-
-
 \left|\mathcal{H}_{D^{\prime}| D}\right| \leqslant \Pi_{\mathcal{H}}(m-1) \leqslant \sum_{i=0}^{d-1}\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right)
-
 $$
 
 
@@ -524,15 +467,12 @@ $$
 Therefore: 
 
 $$
-
-
 \begin{aligned}
 \left|\mathcal{H}_{| D}\right|&=\left|\mathcal{H}_{| D^{\prime}}\right|+\left|\mathcal{H}_{D^{\prime} | D}\right|\\
 &\leqslant \sum_{i=0}^{d}\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right) + \sum_{i=0}^{d-1}\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right)\\
 &=\sum_{i=0}^d \left(\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right) + \left(\begin{array}{c}{m-1} \\ {i-1}\end{array}\right)\right)\\
 &=\sum_{i=0}^{d}\left(\begin{array}{c}{m} \\ {i}\end{array}\right)
 \end{aligned}
-
 $$
 
 
@@ -542,10 +482,7 @@ $$
 
 
 $$
-
-
 \begin{aligned}\left(\begin{array}{c}{m-1} \\ {i}\end{array}\right)+\left(\begin{array}{c}{m-1} \\ {i-1}\end{array}\right) &=\frac{(m-1) !}{(m-1-i) ! i !}+\frac{(m-1) !}{(m-1-i+1) !(i-1) !} \\ &=\frac{(m-1) !(m-i)}{(m-i)(m-1-i) ! i !}+\frac{(m-1) ! i}{(m-i) !(i-1) ! i} \\ &=\frac{(m-1) !(m-i)+(m-1) ! i}{(m-i) ! i !} \\ &=\frac{(m-1) !(m-i+i)}{(m-i) ! i !}=\frac{(m-1) ! m}{(m-i) ! i !} \\ &=\frac{m !}{(m-i) ! i !}=\left(\begin{array}{c}{m} \\ {i}\end{array}\right) \end{aligned}
-
 $$
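The hunk above proves Pascal's rule, the identity used to merge the two binomial sums. An exhaustive spot check with the standard library:

```python
from math import comb

# Pascal's rule: C(m-1, i) + C(m-1, i-1) = C(m, i), with C(m-1, -1) = 0.
for m in range(1, 30):
    for i in range(m + 1):
        lhs = comb(m - 1, i) + (comb(m - 1, i - 1) if i > 0 else 0)
        assert lhs == comb(m, i)
print("Pascal's rule holds for all m < 30")
```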
 
 
@@ -556,12 +493,9 @@ $$
 
 
 $$
-
-
 \begin{aligned} \Pi_{\mathcal{H}}(m) & \leqslant \sum_{i=0}^{d}\left(\begin{array}{c}{m} \\ {i}\end{array}\right) \\ & \leqslant \sum_{i=0}^{d}\left(\begin{array}{c}{m} \\ {i}\end{array}\right)\left(\frac{m}{d}\right)^{d-i} \\ &=\left(\frac{m}{d}\right)^{d} \sum_{i=0}^{d}\left(\begin{array}{c}{m} \\ {i}\end{array}\right)\left(\frac{d}{m}\right)^{i} \\ & \leqslant\left(\frac{m}{d}\right)^{d} \sum_{i=0}^{m}\left(\begin{array}{c}{m} \\ {i}\end{array}\right)\left(\frac{d}{m}\right)^{i} \\ 
 &={\left(\frac{m}{d}\right)}^d{\left(1+\frac{d}{m}\right)}^m\\
 &<\left(\frac{e \cdot m}{d}\right)^{d} \end{aligned}
-
 $$
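The chain of inequalities above bounds the growth function polynomially, $\Pi_{\mathcal{H}}(m)<\left(\frac{e \cdot m}{d}\right)^{d}$. A numerical spot check, with the grid ranges chosen arbitrarily:

```python
from math import comb, e

# Check sum_{i<=d} C(m, i) < (e * m / d)^d for m >= d >= 1 on a small grid.
for d in range(1, 10):
    for m in range(d, 100):
        assert sum(comb(m, i) for i in range(d + 1)) < (e * m / d) ** d
print("polynomial growth bound verified on the grid")
```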
 
 
@@ -575,13 +509,10 @@ $$
 
 
 $$
-
-
 P\left(\vert 
 E(h)-\widehat{E}(h) \vert> \epsilon
 \right) 
 \leqslant 4{\left(\frac{2em}{d}\right)}^d\exp\left(-\frac{m\epsilon^2}{8}\right)
-
 $$
 
 
@@ -590,12 +521,9 @@ $$
 
 
 $$
-
-
 \epsilon=\sqrt{
 \frac{8d\ln\frac{2em}{d}+8\ln\frac{4}{\delta}}{m}
 }
-
 $$
 
 
@@ -613,13 +541,10 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 \delta^\prime = \frac{\delta}{2} \\
 \sqrt{\frac{\left(\ln 2 / \delta^{\prime}\right)}{2 m}}=\frac{\epsilon}{2}
 \end{aligned}
-
 $$
 
 
@@ -627,10 +552,7 @@ $$
 
 
 $$
-
-
 \widehat{E}(g)-\frac{\epsilon}{2} \leqslant E(g) \leqslant \widehat{E}(g)+\frac{\epsilon}{2}
-
 $$
 
 
@@ -639,10 +561,7 @@ $$
 
 
 $$
-
-
 P\left(|E(g)-\widehat{E}(g)| \leqslant \frac{\epsilon}{2}\right) \geqslant 1-\delta / 2
-
 $$
 
 
@@ -652,10 +571,7 @@ $$
 
 
 $$
-
-
 \sqrt{\frac{8 d \ln \frac{2 e m}{d}+8 \ln \frac{4}{\delta^{\prime}}}{m}}=\frac{\epsilon}{2}
-
 $$
 
 
@@ -663,13 +579,10 @@ $$
 By Eq. 12.29, 
 
 $$
-
-
 P\left(\left\vert 
 E(h)-\widehat{E}(h) \right\vert\leqslant \frac{\epsilon}{2}
 \right)
 \geqslant 1-\frac{\delta}{2}
-
 $$
 
 
@@ -680,8 +593,6 @@ $$
 
 
 $$
-
-
 \begin{aligned}
 &P\left(
 \left(E(g)-\widehat{E}(g) \geqslant -\frac{\epsilon}{2} \right)\wedge\left(E(h)-\widehat{E}(h) \leqslant \frac{\epsilon}{2}
@@ -694,32 +605,25 @@ P\left(E(h)-\widehat{E}(h) \leqslant \frac{\epsilon}{2}\right)
 \\\geqslant &1 - \delta/2 + 1 - \delta/2 - 1
 \\=& 1-\delta
 \end{aligned}
-
 $$
 
 
 
 $$
-
-
 P\left(
 \left(E(g)-\widehat{E}(g) \geqslant -\frac{\epsilon}{2} \right)\wedge\left(E(h)-\widehat{E}(h) \leqslant \frac{\epsilon}{2}
 \right)\right) \geqslant 1-\delta
-
 $$
 
 
 Therefore 
 
 $$
-
-
 P\left(
 \widehat{E}(g)-E(g)+E(h)-\widehat{E}(h)\leqslant\frac{\epsilon}{2} + \frac{\epsilon}{2} 
 \right) = P\left(E(h)-E(g)\leqslant\widehat{E}(h)-\widehat{E}(g)+\epsilon\right)
 \geqslant 1 - \delta
-
 $$
 
 
@@ -728,11 +632,8 @@ $$
 
 
 $$
-
-
 P\left(E(h)-E(g)\leqslant\epsilon\right)
 \geqslant 1 - \delta
-
 $$
 
 
@@ -784,14 +685,11 @@ $\boldsymbol{\sigma}$ is merely the outcome of this particular random draw; the next
 $\mathcal{H}=\left\{h_1, h_2, h_3\right\}$, where 
 
 $$
-
-
 \begin{aligned}
 & \left\{h_1\left(\boldsymbol{x}_1\right), h_1\left(\boldsymbol{x}_2\right), h_1\left(\boldsymbol{x}_3\right), h_1\left(\boldsymbol{x}_4\right)\right\}=\{-1,-1,-1,-1\} \\
 & \left\{h_2\left(\boldsymbol{x}_1\right), h_2\left(\boldsymbol{x}_2\right), h_2\left(\boldsymbol{x}_3\right), h_2\left(\boldsymbol{x}_4\right)\right\}=\{-1,+1,-1,-1\} \\
 & \left\{h_3\left(\boldsymbol{x}_1\right), h_3\left(\boldsymbol{x}_2\right), h_3\left(\boldsymbol{x}_3\right), h_3\left(\boldsymbol{x}_4\right)\right\}=\{+1,+1,+1,+1\}
 \end{aligned}
-
 $$
 
 
@@ -801,10 +699,7 @@ $\frac{1}{m} \sum_{i=1}^m \sigma_i h_1\left(\boldsymbol{x}_i\right)=0, \frac{1}{
 
 
 $$
-
-
 \sup _{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^m \sigma_i h\left(\boldsymbol{x}_i\right)=\frac{2}{4}
-
 $$
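For the three-hypothesis example above, the expectation over all $2^4$ sign vectors can be enumerated exhaustively. A sketch, with the value vectors transcribed from the example:

```python
from itertools import product

# The three hypotheses, written as value vectors (h(x_1), ..., h(x_4)).
H = [(-1, -1, -1, -1), (-1, +1, -1, -1), (+1, +1, +1, +1)]
m = 4

# Empirical Rademacher complexity: average, over all 2^m sign assignments
# sigma, of sup_{h in H} (1/m) * sum_i sigma_i * h(x_i).
total = 0.0
for sigma in product((-1, +1), repeat=m):
    total += max(sum(s * v for s, v in zip(sigma, h)) / m for h in H)
emp_rad = total / 2 ** m
print(emp_rad)  # 0.5 for this H
```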
 
 
@@ -815,10 +710,7 @@ $$
 
 
 $$
-
-
 \mathbb{E}_{\boldsymbol{\sigma}}\left[\sup _{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^{m} \sigma_{i} h\left(\boldsymbol{x}_{i}\right)\right]
-
 $$
 
 
@@ -839,10 +731,7 @@ $$
 
 
 $$
-
-
 \begin{aligned} \widehat{E}_{Z}(f) &=\frac{1}{m} \sum_{i=1}^{m} f\left(\boldsymbol{z}_{i}\right) \\ \Phi(Z) &=\sup _{f \in \mathcal{F}} \left(\mathbb{E}[f]-\widehat{E}_{Z}(f)\right) \end{aligned}
-
 $$
 
 
@@ -852,10 +741,7 @@ $$
 
 
 $$
-
-
 \begin{aligned} \Phi\left(Z^{\prime}\right)-\Phi(Z) &=\sup _{f \in \mathcal{F}} \left(\mathbb{E}[f]-\widehat{E}_{Z^{\prime}}(f)\right)-\sup _{f \in \mathcal{F}} \left(\mathbb{E}[f]-\widehat{E}_{Z}(f)\right) \\ & \leqslant \sup _{f \in \mathcal{F}} \left(\widehat{E}_{Z}(f)-\widehat{E}_{Z^{\prime}}(f)\right) \\ &=\sup_{f\in\mathcal{F}}\frac{\sum^m_{i=1}f(z_i)-\sum^m_{i=1}f(z^\prime_i)}{m}\\&=\sup _{f \in \mathcal{F}} \frac{f\left(z_{m}\right)-f\left(z_{m}^{\prime}\right)}{m} \\ & \leqslant \frac{1}{m} \end{aligned}
-
 $$
 
 
@@ -866,10 +752,7 @@ $$
 
 
 $$
-
-
 \Phi(Z)-\Phi\left(Z^{\prime}\right) =\sup _{f \in \mathcal{F}} \frac{f\left(z_{m}^\prime\right)-f\left(z_{m}\right)}{m} \leqslant \frac{1}{m}
-
 $$
 
 
@@ -879,10 +762,7 @@ $$
 
 
 $$
-
-
 \left\vert \Phi(Z)-\Phi\left(Z^{\prime}\right)\right\vert \leqslant \frac{1}{m}
-
 $$
 
 
@@ -892,10 +772,7 @@ $$
 
 
 $$
-
-
 P\left(\Phi(Z)-\mathbb{E}_{Z}[\Phi(Z)] \geqslant \epsilon\right) \leqslant \exp \left(\frac{-2 \epsilon^{2}}{\sum_{i} c_{i}^{2}}\right)
-
 $$
 
 
@@ -905,10 +782,7 @@ $$
 
 
 $$
-
-
 P\left(\Phi(Z)-\mathbb{E}_{Z}[\Phi(Z)] \geqslant \sqrt{\frac{\ln (1 / \delta)}{2 m}}\right) \leqslant \delta
-
 $$
 
 
@@ -918,10 +792,7 @@ $$
 
 
 $$
-
-
 P\left(\Phi(Z)-\mathbb{E}_{Z}[\Phi(Z)] \leqslant \sqrt{\frac{\ln (1 / \delta)}{2 m}}\right) \geqslant 1-\delta
-
 $$
 
 
@@ -931,10 +802,7 @@ $$
 
 
 $$
-
-
 \begin{aligned} \mathbb{E}_{Z}[\Phi(Z)] &=\mathbb{E}_{Z}\left[\sup _{f \in \mathcal{F}} \left(\mathbb{E}[f]-\widehat{E}_{Z}(f)\right)\right] \\ &=\mathbb{E}_{Z}\left[\sup _{f \in \mathcal{F}} \mathbb{E}_{Z^{\prime}}\left[\widehat{E}_{Z^{\prime}}(f)-\widehat{E}_{Z}(f)\right]\right] \\ & \leqslant \mathbb{E}_{Z, Z^{\prime}}\left[\sup _{f \in \mathcal{F}}\left( \widehat{E}_{Z^{\prime}}(f)-\widehat{E}_{Z}(f)\right)\right] \\ &=\mathbb{E}_{Z, Z^{\prime}}\left[\sup _{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m}\left(f\left(\boldsymbol{z}_{i}^{\prime}\right)-f\left(\boldsymbol{z}_{i}\right)\right)\right] \\ &=\mathbb{E}_{\boldsymbol{\sigma}, Z,Z^{\prime}}\left[\sup _{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_{i}\left(f\left(\boldsymbol{z}_{i}^{\prime}\right)-f\left(\boldsymbol{z}_{i}\right)\right)\right] \\ &\leqslant \mathbb{E}_{\boldsymbol{\sigma}, Z^{\prime}}\left[\sup _{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_{i} f\left(\boldsymbol{z}_{i}^{\prime}\right)\right]+\mathbb{E}_{\boldsymbol{\sigma}, Z}\left[\sup _{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m}-\sigma_{i} f\left(\boldsymbol{z}_{i}\right)\right] \\ &=2 \mathbb{E}_{\boldsymbol{\sigma}, Z}\left[\sup _{f \in \mathcal{F}} \frac{1}{m} \sum_{i=1}^{m} \sigma_{i} f\left(\boldsymbol{z}_{i}\right)\right] \\ &=2 R_{m}(\mathcal{F}) \end{aligned}
-
 $$
 
 
@@ -993,10 +861,7 @@ $\mathbb{I}\left(h\left(\boldsymbol{x}_i\right) \neq y_i\right)$; the 3rd
 
 
 $$
-
-
 \sup _{h \in \mathcal{H}} \frac{1}{m} \sum_{i=1}^m \sigma_i \frac{1-y_i h\left(\boldsymbol{x}_i\right)}{2}=\frac{1}{2 m} \sum_{i=1}^m \sigma_i+\sup _{h \in \mathcal{H}} \frac{1}{2 m} \sum_{i=1}^m\left(-y_i \sigma_i h\left(\boldsymbol{x}_i\right)\right)
-
 $$
 
 
@@ -1018,10 +883,7 @@ the same as $\mathbb{E}_Z[\Phi(Z)] \leqslant 2 R_m(\mathcal{F})$,
 
 
 $$
-
-
 R_m\left(\mathcal{F}_{\mathcal{H}}\right)=\mathbb{E}_Z\left[\widehat{R}_Z\left(\mathcal{F}_{\mathcal{H}}\right)\right]=\mathbb{E}_D\left[\frac{1}{2} \widehat{R}_D(\mathcal{H})\right]=\frac{1}{2} \mathbb{E}_D\left[\widehat{R}_D(\mathcal{H})\right]=\frac{1}{2} R_m(\mathcal{H})
-
 $$
 
 
@@ -1032,13 +894,10 @@ $$
 
 
 $$
-
-
 \begin{gathered}
 \mathbb{E}[f(\boldsymbol{z})]=\mathbb{E}[\mathbb{I}(h(\boldsymbol{x}) \neq y)]=E(h) \\
 \frac{1}{m} \sum_{i=1}^m f\left(\boldsymbol{z}_i\right)=\frac{1}{m} \sum_{i=1}^m \mathbb{I}\left(h\left(\boldsymbol{x}_i\right) \neq y_i\right)=\widehat{E}(h)
 \end{gathered}
-
 $$
 
 
@@ -1137,10 +996,7 @@ Minimization). Since $\mathcal{L}$ performs empirical risk minimization, we can let $g$ denote
 
 
 $$
-
-
 \ell(g, \mathcal{D})=\min _{h \in \mathcal{H}} \ell(h, \mathcal{D})
-
 $$
 
 
@@ -1150,10 +1006,7 @@ $$
 
 
 $$
-
-
 \begin{array}{l}{\epsilon^{\prime}=\frac{\epsilon}{2}} \\ {\frac{\delta}{2}=2 \exp \left(-2 m\left(\epsilon^{\prime}\right)^{2}\right)}\end{array}
-
 $$
 
 
@@ -1162,10 +1015,7 @@ $$
 Substituting $\epsilon^\prime=\frac{\epsilon}{2}$ into ${\frac{\delta}{2}=2 \exp \left(-2 m\left(\epsilon^{\prime}\right)^{2}\right)}$ yields $m=\frac{2}{\epsilon^{2}} \ln \frac{4}{\delta}$; then by Hoeffding's inequality (12.6),
 
 $$
-
-
 P\left(\left\vert\frac{1}{m} \sum_{i=1}^{m} x_{i}-\frac{1}{m} \sum_{i=1}^{m} \mathbb{E}\left(x_{i}\right)\right\vert \geqslant \epsilon\right) \leqslant 2 \exp \left(-2 m \epsilon^{2}\right)
-
 $$
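Hoeffding's inequality (12.6), restored above, can be checked by simulation. A sketch with arbitrary choices of `m`, `eps`, and trial count:

```python
import random
from math import exp

# Monte Carlo check of the two-sided Hoeffding bound
# P(|mean - E[mean]| >= eps) <= 2 * exp(-2 * m * eps^2)
# for i.i.d. Bernoulli(0.5) variables, which take values in [0, 1].
random.seed(1)
m, eps, trials = 100, 0.1, 20000
hits = sum(
    abs(sum(random.random() < 0.5 for _ in range(m)) / m - 0.5) >= eps
    for _ in range(trials)
)
freq = hits / trials
bound = 2 * exp(-2 * m * eps ** 2)
print(freq, "<=", bound)
assert freq <= bound
```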
 
 
@@ -1174,10 +1024,7 @@ $$
 
 
 $$
-
-
 P(|\ell(g, \mathcal{D})-\widehat{\ell}(g, D)| \geqslant \frac{\epsilon}{2})\leqslant \frac{\delta}{2}
-
 $$
 
 
@@ -1187,10 +1034,7 @@ $$
 
 
 $$
-
-
 P(|\ell(g, \mathcal{D})-\widehat{\ell}(g, D)| \leqslant \frac{\epsilon}{2})\geqslant 1- \frac{\delta}{2}
-
 $$
 
 
@@ -1202,10 +1046,7 @@ $$
 
 
 $$
-
-
 \sqrt{m}=\frac{(4+M) \sqrt{\frac{\ln (2 / \delta)}{2}}+\sqrt{(4+M)^{2} \frac{\ln (2 / \delta)}{2}-4 \times \frac{\epsilon}{2} \times(-2)}}{2 \times \frac{\epsilon}{2}}
-
 $$
 
 
@@ -1217,10 +1058,7 @@ $$
 
 
 $$
-
-
 P(\ell(\mathfrak{L}, \mathcal{D})-\ell(g, \mathcal{D})\leqslant\epsilon)\geqslant 1-\delta
-
 $$
 
 

+ 0 - 116
docs/chapter13/chapter13.md

@@ -46,9 +46,7 @@ $\sum_{i=1}^N$ to cancel out the introduced factor. The derivation from line 2 to line 3 of the formula is as
 
 
 $$
-
 \begin{aligned}p(y=j, \Theta=i | \boldsymbol{x}) &=\frac{p(y=j, \Theta=i, \boldsymbol{x})}{p(\boldsymbol{x})} \\&=\frac{p(y=j, \Theta=i, \boldsymbol{x})}{p(\Theta=i, \boldsymbol{x})} \cdot \frac{p(\Theta=i, \boldsymbol{x})}{p(\boldsymbol{x})} \\&=p(y=j | \Theta=i, \boldsymbol{x}) \cdot p(\Theta=i | \boldsymbol{x})\end{aligned}
-
 $$
 
 
@@ -77,9 +75,7 @@ $\boldsymbol{x}$ 的条件概率(已知 $\Theta$ 就足够, 不需 $\boldsymbo
 
 
 $$
-
 p(y=j \mid \Theta=i, \boldsymbol{x})= \begin{cases}1, & i=j \\ 0, & i \neq j\end{cases}
-
 $$
 
 
@@ -90,9 +86,7 @@ $$
 
 
 $$
-
 p(\boldsymbol{x})=\sum_{i=1}^{N} \alpha_{i} \cdot p\left(\boldsymbol{x} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)
-
 $$
 
 
@@ -101,9 +95,7 @@ $$
 
 
 $$
-
 \begin{aligned}p(\Theta=i | \boldsymbol{x})&=\frac{p(\Theta=i , \boldsymbol{x})}{p(\boldsymbol{x})}\\&=\frac{\alpha_{i} \cdot p\left(\boldsymbol{x} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\sum_{i=1}^{N} \alpha_{i} \cdot p\left(\boldsymbol{x} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}\end{aligned}
-
 $$
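The posterior above is the mixture "responsibility" $\gamma_{ji}$. A minimal one-dimensional sketch; the function names are my own, and a real mixture would replace `gauss` with a multivariate normal density:

```python
from math import exp, pi, sqrt

# gamma_i = alpha_i * p(x | mu_i, var_i) / p(x), where
# p(x) = sum_i alpha_i * p(x | mu_i, var_i).
def gauss(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

def responsibilities(x, alphas, mus, variances):
    weighted = [a * gauss(x, mu, v) for a, mu, v in zip(alphas, mus, variances)]
    z = sum(weighted)  # the mixture density p(x)
    return [w / z for w in weighted]

gamma = responsibilities(0.9, [0.5, 0.5], [0.0, 1.0], [1.0, 1.0])
print(gamma)  # sums to 1; the component centered at 1.0 gets more weight
```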
 
 
@@ -115,9 +107,7 @@ $$
 
 
 $$
-
 p(y=j | \Theta=i, \boldsymbol{x})=\left\{\begin{array}{ll}1, & i=j \\0, & i \neq j\end{array}\right.
-
 $$
 
 
@@ -130,9 +120,7 @@ $$
 
 
 $$
-
 \begin{array}{l}\alpha_{i}=\frac{l_{i}}{\left|D_{l}\right|}, \text { where }\left|D_{l}\right|=\sum_{i=1}^{N} l_{i} \\\boldsymbol{\mu}_{i}=\frac{1}{l_{i}} \sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{x}_{j} \\\boldsymbol{\Sigma}_{i}=\frac{1}{l_{i}} \sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}\end{array}
-
 $$
 
 
@@ -145,9 +133,7 @@ $$
 This term can be obtained from
 
 $$
-
 \cfrac{\partial LL(D_l \cup D_u) }{\partial \mu_i}=0
-
 $$
 
 ; substituting Eq.
@@ -155,9 +141,7 @@ $$
 
 
 $$
-
 \begin{aligned}LL(D_l)&=\sum_{(\boldsymbol{x_j},y_j) \in D_l}\ln\left(\sum_{s=1}^{N}\alpha_s \cdot p(\boldsymbol{x_j}\vert \boldsymbol{\mu}_s,\boldsymbol{\Sigma}_s) \cdot p(y_j|\Theta = s,\boldsymbol{x_j})\right)\\&=\sum_{(\boldsymbol{x_j},y_j) \in D_l}\ln\left(\alpha_{y_j} \cdot p(\boldsymbol{x_j} \vert \boldsymbol{\mu}_{y_j},\boldsymbol{\Sigma}_{y_j})\right)\\LL(D_u)&=\sum_{\boldsymbol{x_j} \in D_u} \ln\left(\sum_{s=1}^N \alpha_s \cdot p(\boldsymbol{x_j} | \boldsymbol{\mu}_s,\boldsymbol{\Sigma}_s)\right)\end{aligned}
-
 $$
 
 
@@ -167,9 +151,7 @@ $$
 
 
 $$
-
 \begin{aligned}\frac{\partial L L\left(D_{l}\right)}{\partial \boldsymbol{\mu}_{i}} &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{\partial \ln \left(\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)\right)}{\partial \boldsymbol{\mu}_{i}} \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot \frac{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\partial \boldsymbol{\mu}_{i}} \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right) \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\end{aligned}
-
 $$
 
 
@@ -178,12 +160,10 @@ Differentiating $LL(D_u)$ with respect to $\boldsymbol{\mu_i}$, following the derivation of Eq. 9.33:
 
 
 $$
-
 \begin{aligned}
 \frac{\partial L L\left(D_{u}\right)}{\partial \boldsymbol{\mu}_{i}} &=\sum_{\boldsymbol{x}_{j} \in D_{u}} \frac{\alpha_{i}}{\sum_{s=1}^{N} \alpha_{s} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{s}, \boldsymbol{\Sigma}_{s}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right) \\
 &=\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)
 \end{aligned}
-
 $$
 
 
@@ -192,9 +172,7 @@ $$
 
 
 $$
-
 \begin{aligned}\frac{\partial L L\left(D_{l} \cup D_{u}\right)}{\partial \boldsymbol{\mu}_{i}} &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)+\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right) \\&=\boldsymbol{\Sigma}_{i}^{-1}\left(\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)+\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\right) \\&=\boldsymbol{\Sigma}_{i}^{-1}\left(\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{x}_{j}+\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{x}_{j}-\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{\mu}_{i}-\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{\mu}_{i}\right)\end{aligned}
-
 $$
 
 
@@ -203,9 +181,7 @@ $$
 
 
 $$
-
 \sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{\mu}_{i}+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{\mu}_{i}=\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{x}_{j}+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{x}_{j}
-
 $$
 
 
@@ -216,9 +192,7 @@ $$
 
 
 $$
-
 \left(\sum_{x_{j} \in D_{u}} \gamma_{j i}+\sum_{\left(x_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} 1\right) \boldsymbol{\mu}_{i}=\sum_{x_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{x}_{j}+\sum_{\left(x_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{x}_{j}
-
 $$
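Solving the rearranged equation above for $\boldsymbol{\mu}_i$ gives the semi-supervised mean update. A scalar sketch with made-up data for illustration:

```python
# mu_i = (sum_u gamma_ji * x_j + sum_{labeled, y_j = i} x_j)
#        / (sum_u gamma_ji + l_i)
def update_mean(unlabeled_x, gamma_i, labeled, i):
    num = sum(g * x for g, x in zip(gamma_i, unlabeled_x))
    den = sum(gamma_i)
    for x, y in labeled:
        if y == i:  # only labeled points of class i contribute
            num += x
            den += 1
    return num / den

mu0 = update_mean([0.0, 2.0], [0.5, 0.5], [(1.0, 0), (5.0, 1)], i=0)
print(mu0)  # (0.5*0 + 0.5*2 + 1.0) / (0.5 + 0.5 + 1) = 1.0
```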
 
 
@@ -231,12 +205,10 @@ $$
 
 
 $$
-
 \begin{aligned} \frac{\partial L L\left(D_{l}\right)}{\partial \boldsymbol{\Sigma}_{i}} &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{\partial \ln \left(\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)\right)}{\partial \boldsymbol{\Sigma}_{i}} \\ &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot \frac{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\partial \boldsymbol{\Sigma}_{i}} \\
 &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \cdot\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1}\\
 &=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1}
 \end{aligned}
-
 $$
 
 
@@ -245,9 +217,7 @@ $$
 
 
 $$
-
 \frac{\partial L L\left(D_{u}\right)}{\partial \boldsymbol{\Sigma}_{i}}=\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1}
-
 $$
 
 
@@ -256,9 +226,7 @@ $$
 
 
 $$
-
 \begin{aligned} \frac{\partial L L\left(D_{l} \cup D_{u}\right)}{\partial \boldsymbol{\Sigma}_{i}}=& \sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1} \\ &+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1} \\=&\left(\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right)\right.\\ &\left.+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}-\boldsymbol{I}\right)\right) \cdot \frac{1}{2} \boldsymbol{\Sigma}_{i}^{-1} \end{aligned}
-
 $$
 
 
@@ -267,9 +235,7 @@ $$
 
 
 $$
-
 \begin{aligned} \sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}+& \sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top} \\=& \sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot \boldsymbol{I}+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \boldsymbol{I} \\ =&\left(\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i}+l_{i}\right) \boldsymbol{I} \end{aligned}
-
 $$
 
 
@@ -278,9 +244,7 @@ $$
 
 
 $$
-
 \sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i} \cdot\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}+\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}=\left(\sum_{\boldsymbol{x}_{j} \in D_{u}} \gamma_{j i}+l_{i}\right) \boldsymbol{\Sigma}_{i}
-
 $$
 
 
@@ -293,9 +257,7 @@ $$
 
 
 $$
-
 \begin{aligned}\mathcal{L}\left(D_{l} \cup D_{u}, \lambda\right) &=L L\left(D_{l} \cup D_{u}\right)+\lambda\left(\sum_{s=1}^{N} \alpha_{s}-1\right) \\&=L L\left(D_{l}\right)+L L\left(D_{u}\right)+\lambda\left(\sum_{s=1}^{N} \alpha_{s}-1\right)\end{aligned}
-
 $$
 
 
@@ -304,9 +266,7 @@ $$
 
 
 $$
-
 \frac{\partial L L\left(D_{u}\right)}{\partial \alpha_{i}}=\sum_{\boldsymbol{x}_{j} \in D_{u}} \frac{1}{\sum_{s=1}^{N} \alpha_{s} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{s}, \boldsymbol{\Sigma}_{s}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)
-
 $$
 
 
@@ -315,9 +275,7 @@ $$
 
 
 $$
-
 \begin{aligned}\frac{\partial L L\left(D_{l}\right)}{\partial \alpha_{i}} &=\sum_{\left(x_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{\partial \ln \left(\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)\right)}{\partial \alpha_{i}} \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot \frac{\partial\left(\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)\right)}{\partial \alpha_{i}} \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \\&=\sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} \frac{1}{\alpha_{i}}=\frac{1}{\alpha_{i}} \cdot \sum_{\left(\boldsymbol{x}_{j}, y_{j}\right) \in D_{l} \wedge y_{j}=i} 1=\frac{l_{i}}{\alpha_{i}}\end{aligned}
-
 $$
 
 
@@ -330,9 +288,7 @@ $l_i$ 为第$i$类样本的有标记样本数目。
 
 
 $$
-
 \frac{\partial \mathcal{L}\left(D_{l} \cup D_{u}, \lambda\right)}{\partial \alpha_{i}}=\frac{l_{i}}{\alpha_{i}}+\sum_{\boldsymbol{x}_{j} \in D_{u}} \frac{p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\sum_{s=1}^{N} \alpha_{s} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{s}, \boldsymbol{\Sigma}_{s}\right)}+\lambda
-
 $$
 
 
@@ -342,9 +298,7 @@ $$
 
 
 $$
-
 \alpha_{i} \cdot \frac{l_{i}}{\alpha_{i}}+\sum_{\boldsymbol{x}_{j} \in D_{u}} \frac{\alpha_{i} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\sum_{s=1}^{N} \alpha_{s} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{s}, \boldsymbol{\Sigma}_{s}\right)}+\lambda \cdot \alpha_{i}=0
-
 $$
 
 
@@ -353,9 +307,7 @@ $$
 
 
 $$
-
 l_i+\sum_{x_j \in D_u} \gamma_{ji}+\lambda \alpha_i = 0
-
 $$
 
 
@@ -364,9 +316,7 @@ $$
 
 
 $$
-
 \sum_{i=1}^N l_i+\sum_{i=1}^N  \sum_{x_j \in D_u} \gamma_{ji}+\sum_{i=1}^N \lambda \alpha_i = 0
-
 $$
 
 
@@ -377,9 +327,7 @@ $$
 
 
 $$
-
 \sum_{i=1}^N \gamma_{ji} =  \sum_{i =1}^{N} \cfrac{\alpha_i \cdot  p\left(\boldsymbol{x}_j | \boldsymbol{\mu}_i,\boldsymbol{\Sigma}_i\right)}{\sum_{s=1}^N \alpha_s \cdot p\left(\boldsymbol{x}_j| \boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s\right)}=  \cfrac{\sum_{i =1}^{N}\alpha_i \cdot  p\left(\boldsymbol{x}_j|\boldsymbol{\mu}_i,\boldsymbol{\Sigma}_i\right)}{\sum_{s=1}^N \alpha_s \cdot p\left(\boldsymbol{x}_j| \boldsymbol{\mu}_s, \boldsymbol{\Sigma}_s\right)}=1
-
 $$
 
 
@@ -388,9 +336,7 @@ $$
 
 
 $$
-
 \sum_{i=1}^N  \sum_{x_j \in D_u} \gamma_{ji}=\sum_{x_j \in D_u} \sum_{i=1}^N  \gamma_{ji} =\sum_{x_j \in D_u} 1=u
-
 $$
 
 
@@ -401,9 +347,7 @@ $\sum_{i=1}^Nl_i=l$其中$l$为有标记样本集的样本个数;将这些结
 
 
 $$
-
 \sum_{i=1}^N l_i+\sum_{i=1}^N  \sum_{x_j \in D_u} \gamma_{ji}+\sum_{i=1}^N \lambda \alpha_i = 0
-
 $$
 
 
@@ -413,9 +357,7 @@ $$
 
 
 $$
-
 l_i + \sum_{x_j \in{D_u}} \gamma_{ji}-\lambda \alpha_i = 0
-
 $$
 
 
@@ -517,13 +459,11 @@ $E(f)$ 越小的目的。 首先对式(13.12)的第 1 行式子进行展开整
 
 
 $$
-
 \begin{aligned}
 E(f) & =\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j}\left(f\left(\boldsymbol{x}_i\right)-f\left(\boldsymbol{x}_j\right)\right)^2 \\
 & =\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j}\left(f^2\left(\boldsymbol{x}_i\right)-2 f\left(\boldsymbol{x}_i\right) f\left(\boldsymbol{x}_j\right)+f^2\left(\boldsymbol{x}_j\right)\right) \\
 & =\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f^2\left(\boldsymbol{x}_i\right)+\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f^2\left(\boldsymbol{x}_j\right)-\sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f\left(\boldsymbol{x}_i\right) f\left(\boldsymbol{x}_j\right)
 \end{aligned}
-
 $$
 
 
@@ -533,12 +473,10 @@ $\sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f^2\left(\boldsymbol{x}_i\right)=\s
 and transform it as follows: 
 
 $$
-
 \begin{aligned}
 \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f^2\left(\boldsymbol{x}_j\right) & =\sum_{j=1}^m \sum_{i=1}^m(\mathbf{W})_{j i} f^2\left(\boldsymbol{x}_i\right)=\sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f^2\left(\boldsymbol{x}_i\right) \\
 & =\sum_{i=1}^m f^2\left(\boldsymbol{x}_i\right) \sum_{j=1}^m(\mathbf{W})_{i j}
 \end{aligned}
-
 $$
 
 
@@ -558,9 +496,7 @@ $d_i=\sum_{j=1}^m(\mathbf{W})_{j i}$, 即第$i$列元素之和), 则
 
 
 $$
-
 E(f)=\sum_{i=1}^m d_i f^2\left(\boldsymbol{x}_i\right)-\sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f\left(\boldsymbol{x}_i\right) f\left(\boldsymbol{x}_j\right)
-
 $$
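The identity above is the standard graph Laplacian energy, $E(f)=\boldsymbol{f}^{\top}(\boldsymbol{D}-\boldsymbol{W})\boldsymbol{f}$. As a numerical sanity check (an addition to these notes, with an arbitrary random symmetric affinity matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 8
W = rng.random((m, m))
W = (W + W.T) / 2                      # affinity matrix must be symmetric
np.fill_diagonal(W, 0)
f = rng.normal(size=m)

# left-hand side: (1/2) sum_ij W_ij (f(x_i) - f(x_j))^2
energy = 0.5 * sum(W[i, j] * (f[i] - f[j]) ** 2
                   for i in range(m) for j in range(m))
D = np.diag(W.sum(axis=1))             # d_i = sum_j W_ij
assert np.isclose(energy, f @ (D - W) @ f)
```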
 
 
@@ -570,12 +506,10 @@ $\sum_{i=1}^m d_i f^2\left(\boldsymbol{x}_i\right)$
 can be written in matrix form as: 
 
 $$
-
 \begin{aligned}
 \sum_{i=1}^m d_i f^2\left(\boldsymbol{x}_i\right) & =\boldsymbol{f}^{\mathrm{T}} \boldsymbol{D} \boldsymbol{f}
 \end{aligned}
-
 $$
 
 
@@ -585,7 +519,6 @@ $\sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f\left(\boldsymbol{x}_i\right) f\le
 can also be written in matrix form as: 
 
 $$
-
 \begin{aligned}
 & \sum_{i=1}^m \sum_{j=1}^m(\mathbf{W})_{i j} f\left(\boldsymbol{x}_i\right) f\left(\boldsymbol{x}_j\right) \\
 & =\left[\begin{array}{llll}
@@ -603,7 +536,6 @@ f\left(\boldsymbol{x}_m\right)
 \end{array}\right] \\
 & =\boldsymbol{f}^{\mathrm{T}} \boldsymbol{W} \boldsymbol{f}
 \end{aligned}
-
 $$
 
 
@@ -637,7 +569,6 @@ $f\left(\boldsymbol{x}_i\right)$ 是待求变 量且应该使 $E(f)$ 最小,
 
 
 $$
-
 \begin{aligned}
 E(f) &=\left[\begin{array}{ll}
 \boldsymbol{f}_{l}^{\mathrm{T}} & \boldsymbol{f}_{u}^{\mathrm{T}}
@@ -656,7 +587,6 @@ E(f) &=\left[\begin{array}{ll}
 &=\boldsymbol{f}_{l}^{\mathrm{T}}\left(\boldsymbol{D}_{l l}-\boldsymbol{W}_{l l}\right) \boldsymbol{f}_{l}-\boldsymbol{f}_{u}^{\mathrm{T}} \boldsymbol{W}_{u l} \boldsymbol{f}_{l}-\boldsymbol{f}_{l}^{\mathrm{T}} \boldsymbol{W}_{l u} \boldsymbol{f}_{u}+\boldsymbol{f}_{u}^{\mathrm{T}}\left(\boldsymbol{D}_{u u}-\boldsymbol{W}_{u u}\right) \boldsymbol{f}_{u} \\
 &=\boldsymbol{f}_{l}^{\mathrm{T}}\left(\boldsymbol{D}_{l l}-\boldsymbol{W}_{l l}\right) \boldsymbol{f}_{l}-2 \boldsymbol{f}_{u}^{\mathrm{T}} \boldsymbol{W}_{u l} \boldsymbol{f}_{l}+\boldsymbol{f}_{u}^{\mathrm{T}}\left(\boldsymbol{D}_{u u}-\boldsymbol{W}_{u u}\right) \boldsymbol{f}_{u}
 \end{aligned}
-
 $$
 
 
@@ -668,12 +598,10 @@ $$
 首先,基于式(13.14)对$\boldsymbol{f}_u$求导: 
 
 $$
-
 \begin{aligned}
 \frac{\partial E(f)}{\partial \boldsymbol{f}_{u}} &=\frac{\partial \boldsymbol{f}_{l}^{\mathrm{T}}\left(\boldsymbol{D}_{l l}-\boldsymbol{W}_{l l}\right) \boldsymbol{f}_{l}-2 \boldsymbol{f}_{u}^{\mathrm{T}} \boldsymbol{W}_{u l} \boldsymbol{f}_{l}+\boldsymbol{f}_{u}^{\mathrm{T}}\left(\boldsymbol{D}_{u u}-\boldsymbol{W}_{u u}\right) \boldsymbol{f}_{u}}{\partial \boldsymbol{f}_{u}} \\
 &=-2 \boldsymbol{W}_{u l} \boldsymbol{f}_{l}+2\left(\boldsymbol{D}_{u u}-\boldsymbol{W}_{u u}\right) \boldsymbol{f}_{u}
 \end{aligned}
-
 $$
 
 Setting the result to 0 yields (13.15).
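A numerical sketch of this step (added here as an illustration, with an arbitrary random graph): solving the linear system from the zeroed gradient gives $\boldsymbol{f}_u=(\boldsymbol{D}_{uu}-\boldsymbol{W}_{uu})^{-1}\boldsymbol{W}_{ul}\boldsymbol{f}_l$, and the gradient indeed vanishes there:

```python
import numpy as np

rng = np.random.default_rng(1)
m, l = 10, 4                               # total samples, labeled samples
W = rng.random((m, m)); W = (W + W.T) / 2
np.fill_diagonal(W, 0)
D = np.diag(W.sum(axis=1))
fl = rng.normal(size=l)                    # given labeled predictions f_l

Duu, Wuu = D[l:, l:], W[l:, l:]
Wul = W[l:, :l]
fu = np.linalg.solve(Duu - Wuu, Wul @ fl)  # Eq. (13.15)

# gradient of E(f) w.r.t. f_u vanishes at the solution
grad = -2 * Wul @ fl + 2 * (Duu - Wuu) @ fu
assert np.allclose(grad, 0)
```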
@@ -705,12 +633,10 @@ $\boldsymbol{f}_l$ 即函数 $f$ 在有标记样本上的预测结果 (即已知
 第一项到第二项是根据矩阵乘法逆的定义:$(\mathbf{A}\mathbf{B})^{-1}=\mathbf{B}^{-1}\mathbf{A}^{-1}$,在这个式子中
 
 $$
-
 \begin{aligned}
 \mathbf{P}_{u u}&=\mathbf{D}_{u u}^{-1} \mathbf{W}_{u u}\\
 \mathbf{P}_{ul}&=\mathbf{D}_{u u}^{-1} \mathbf{W}_{u l}
 \end{aligned}
-
 $$
 
 can both be computed from $\mathbf{W}_{ij}$, so the labels $\mathbf{f}_u$ of the unlabeled data can be computed from the known labels $\mathbf{f}_l$.
@@ -728,9 +654,7 @@ $|\mathcal{Y}|$ 表示集合 $\mathcal{Y}$ 的势, 即包含元素 (类别) 的
 
 
 $$
-
 \mathbf{F}^{*}=\lim _{t \rightarrow \infty} \mathbf{F}(t)=(1-\alpha)(\mathbf{I}-\alpha \mathbf{S})^{-1} \mathbf{Y}
-
 $$
 
 
@@ -738,16 +662,13 @@ $$
 
 
 $$
-
 \mathbf{F}(t+1)=\alpha \mathbf{S} \mathbf{F}(t)+(1-\alpha) \mathbf{Y}
-
 $$
 
 
 For different values of t, we have: 
 
 $$
-
 \begin{aligned}
 t=0: \mathbf{F}(1) &=\alpha \mathbf{S F}(0)+(1-\alpha) \mathbf{Y}\\
 &=\alpha \mathbf{S} \mathbf{Y}+(1-\alpha) \mathbf{Y} \\
@@ -756,16 +677,13 @@ t=1: \mathbf{F}(2) &=\alpha \mathbf{S F}(1)+(1-\alpha) \mathbf{Y}=\alpha \mathbf
 t=2:\mathbf{F}(3)&=\alpha\mathbf{S}\mathbf{F}(2)+(1-\alpha)\mathbf{Y}\\&=\alpha \mathbf{S}\left((\alpha \mathbf{S})^{2} \mathbf{Y}+(1-\alpha)\left(\sum_{i=0}^{1}(\alpha \mathbf{S})^{i}\right) \mathbf{Y}\right)+(1-\alpha) \mathbf{Y} \\
 &=(\alpha \mathbf{S})^{3} \mathbf{Y}+(1-\alpha)\left(\sum_{i=0}^{2}(\alpha \mathbf{S})^{i}\right) \mathbf{Y}\\
 \end{aligned}
-
 $$
 
 We can observe the pattern
 
 
 $$
-
 \mathbf{F}(t)=(\alpha \mathbf{S})^{t} \mathbf{Y}+(1-\alpha)\left(\sum_{i=0}^{t-1}(\alpha \mathbf{S})^{i}\right) \mathbf{Y}
-
 $$
 
 
@@ -773,9 +691,7 @@ $$
 
 
 $$
-
 \mathbf{F}^{*}=\lim _{t \rightarrow \infty}\mathbf{F}(t)=\lim _{t \rightarrow \infty}(\alpha \mathbf{S})^{t} \mathbf{Y}+\lim _{t \rightarrow \infty}(1-\alpha)\left(\sum_{i=0}^{t-1}(\alpha \mathbf{S})^{i}\right) \mathbf{Y}
-
 $$
 
 
@@ -784,9 +700,7 @@ $$
 
 
 $$
-
 \lim _{t \rightarrow \infty} \sum_{i=0}^{t-1}(\alpha \mathbf{S})^{i}=(\mathbf{I}-\alpha \mathbf{S})^{-1}\left(\mathbf{I}-\lim _{t \rightarrow \infty}(\alpha \mathbf{S})^{t}\right)=(\mathbf{I}-\alpha \mathbf{S})^{-1} \mathbf{I}=(\mathbf{I}-\alpha \mathbf{S})^{-1}
-
 $$
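The closed form can be checked numerically (a sketch added to these notes; graph size, labels, and $\alpha=0.5$ are arbitrary choices): iterating $\mathbf{F}(t+1)=\alpha\mathbf{S}\mathbf{F}(t)+(1-\alpha)\mathbf{Y}$ converges to $(1-\alpha)(\mathbf{I}-\alpha\mathbf{S})^{-1}\mathbf{Y}$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, c, alpha = 12, 3, 0.5
W = rng.random((m, m)); W = (W + W.T) / 2
np.fill_diagonal(W, 0)
d = W.sum(axis=1)
S = W / np.sqrt(np.outer(d, d))            # S = D^{-1/2} W D^{-1/2}
Y = np.eye(c)[rng.integers(0, c, size=m)]  # one-hot label matrix

F = Y.copy()
for _ in range(500):                       # F(t+1) = alpha*S*F(t) + (1-alpha)*Y
    F = alpha * S @ F + (1 - alpha) * Y

F_star = (1 - alpha) * np.linalg.solve(np.eye(m) - alpha * S, Y)
assert np.allclose(F, F_star, atol=1e-8)
```

Convergence relies on the spectral radius of $\alpha\mathbf{S}$ being below 1, which holds here since the eigenvalues of the normalized $\mathbf{S}$ lie in $[-1,1]$ and $\alpha<1$.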
 
 
@@ -801,12 +715,10 @@ $$
 First expand the squared norm into four terms
 
 $$
-
 \begin{aligned}
 \left\|\frac{1}{\sqrt{d_i}} \mathbf{F}_i-\frac{1}{\sqrt{d_j}} \mathbf{F}_j\right\|^2 & =\left(\frac{1}{\sqrt{d_i}} \mathbf{F}_i-\frac{1}{\sqrt{d_j}} \mathbf{F}_j\right)\left(\frac{1}{\sqrt{d_i}} \mathbf{F}_i-\frac{1}{\sqrt{d_j}} \mathbf{F}_j\right)^{\top} \\
 & =\frac{1}{d_i} \mathbf{F}_i \mathbf{F}_i^{\top}+\frac{1}{d_j} \mathbf{F}_j \mathbf{F}_j^{\top}-\frac{1}{\sqrt{d_i d_j}} \mathbf{F}_i \mathbf{F}_j^{\top}-\frac{1}{\sqrt{d_j d_i}} \mathbf{F}_j \mathbf{F}_i^{\top}
 \end{aligned}
-
 $$
 
 
@@ -818,13 +730,11 @@ $\sum_{i=1}^m \sum_{i=1}^m$ 的形式, 并将上面拆分的四项中的前两
 
 
 $$
-
 \begin{aligned}
 & \sum_{i, j=1}^m(\mathbf{W})_{i j} \frac{1}{d_i} \mathbf{F}_i \mathbf{F}_i^{\top}=\sum_{i=1}^m \frac{1}{d_i} \mathbf{F}_i \mathbf{F}_i^{\top} \sum_{j=1}^m(\mathbf{W})_{i j}=\sum_{i=1}^m \frac{1}{d_i} \mathbf{F}_i \mathbf{F}_i^{\top} \cdot d_i=\sum_{i=1}^m \mathbf{F}_i \mathbf{F}_i^{\top} \\
 & \sum_{i, j=1}^m(\mathbf{W})_{i j} \frac{1}{d_j} \mathbf{F}_j \mathbf{F}_j^{\top}=\sum_{j=1}^m \frac{1}{d_j} \mathbf{F}_j \mathbf{F}_j^{\top} \sum_{i=1}^m(\mathbf{W})_{i j}=\sum_{j=1}^m \frac{1}{d_j} \mathbf{F}_j \mathbf{F}_j^{\top} \cdot d_j=\sum_{j=1}^m \mathbf{F}_j \mathbf{F}_j^{\top} \\
 &
 \end{aligned}
-
 $$
 
 
@@ -836,9 +746,7 @@ $d_i=\sum_{j=1}^m(\mathbf{W})_{i j}=\sum_{j=1}^m(\mathbf{W})_{j i}$
 
 
 $$
-
 \sum_{i=1}^m \mathbf{F}_i \mathbf{F}_i^{\top}=\sum_{j=1}^m \mathbf{F}_j \mathbf{F}_j^{\top}=\sum_{i=1}^m\left\|\mathbf{F}_i\right\|^2=\|\mathbf{F}\|_{\mathrm{F}}^2=\operatorname{tr}\left(\mathbf{F} \mathbf{F}^{\top}\right)
-
 $$
 
 
@@ -854,9 +762,7 @@ $\left\|\mathbf{F}_i\right\|^2$ 形式; 从第 2
 
 
 $$
-
 \sum_{i, j=1}^m(\mathbf{W})_{i j} \frac{1}{\sqrt{d_i d_j}} \mathbf{F}_i \mathbf{F}_j^{\top}=\sum_{i, j=1}^m(\mathbf{S})_{i j} \mathbf{F}_i \mathbf{F}_j^{\top}=\operatorname{tr}\left(\mathbf{S}^{\top} \mathbf{F} \mathbf{F}^{\top}\right)=\operatorname{tr}\left(\mathbf{S F} \mathbf{F}^{\top}\right)
-
 $$
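This trace identity can be confirmed with a small random instance (an illustrative addition; the symmetric $\mathbf{S}$ and the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, c = 6, 3
S = rng.random((m, m)); S = (S + S.T) / 2   # S is symmetric, as in the text
F = rng.normal(size=(m, c))                 # rows are the F_i

# sum_ij S_ij * (F_i . F_j)  versus  tr(S F F^T)
lhs = sum(S[i, j] * (F[i] @ F[j]) for i in range(m) for j in range(m))
assert np.isclose(lhs, np.trace(S @ F @ F.T))
```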
 
 
@@ -864,7 +770,6 @@ $$
 Concretely, the simplification above proceeds as: 
 
 $$
-
 \begin{aligned}
 & \mathbf{S}=\left[\begin{array}{cccc}
 (\mathbf{S})_{11} & (\mathbf{S})_{12} & \cdots & (\mathbf{S})_{1 m} \\
@@ -891,7 +796,6 @@ $$
 \end{array}\right] \\
 &
 \end{aligned}
-
 $$
 
 
@@ -901,7 +805,6 @@ $(\mathbf{S})_{i j}=\frac{1}{\sqrt{d_i d_j}}(\mathbf{W})_{i j}$, 即第 1
 equality sign; and 
 
 $$
-
 \mathbf{F F}^{\top}=\left[\begin{array}{c}
 \mathbf{F}_1 \\
 \mathbf{F}_2 \\
@@ -915,7 +818,6 @@ $$
 \vdots & \vdots & \ddots & \vdots \\
 \mathbf{F}_m \mathbf{F}_1^{\top} & \mathbf{F}_m \mathbf{F}_2^{\top} & \cdots & \mathbf{F}_m \mathbf{F}_m^{\top}
 \end{array}\right]
-
 $$
 
 
@@ -926,9 +828,7 @@ Hadmard 积, 即矩阵 $\mathbf{S}$ 与矩阵 $\mathbf{F} \mathbf{F}^{\top}$
 
 
 $$
-
 \sum_{i, j=1}^m(\mathbf{S})_{i j} \mathbf{F}_i \mathbf{F}_j^{\top}=\sum_{i, j=1}^m(\mathbf{A})_{i j}
-
 $$
 
 
@@ -940,7 +840,6 @@ $\operatorname{tr}\left(\mathbf{S}^{\top} \mathbf{F F}^{\top}\right)$,
 This is because 
 
 $$
-
 \begin{aligned}
 & \operatorname{tr}\left(\left[\begin{array}{cccc}
 (\mathbf{S})_{11} & (\mathbf{S})_{12} & \cdots & (\mathbf{S})_{1 m} \\
@@ -987,7 +886,6 @@ $$
 &=\sum_{i=1}^m(\mathbf{S})_{i 1} \mathbf{F}_i \mathbf{F}_1^{\top}+\sum_{i=1}^m(\mathbf{S})_{i 2} \mathbf{F}_i \mathbf{F}_2^{\top}+\ldots+\sum_{i=1}^m(\mathbf{S})_{i m} \mathbf{F}_i \mathbf{F}_m^{\top} \\
 &= \sum_{i, j=1}^m(\mathbf{S})_{i j} \mathbf{F}_i \mathbf{F}_j^{\top}
 \end{aligned}
-
 $$
 
 
@@ -999,9 +897,7 @@ $\mathbf{F}_i \mathbf{F}_j^{\top}$ 是一个 数 (即大小为 $1 \times 1$
 
 
 $$
-
 \mathbf{F}_i \mathbf{F}_j^{\top}=\left(\mathbf{F}_i \mathbf{F}_j^{\top}\right)^{\top}=\left(\mathbf{F}_j^{\top}\right)^{\top}\left(\mathbf{F}_i\right)^{\top}=\mathbf{F}_j \mathbf{F}_i^{\top}
-
 $$
 
 
@@ -1010,9 +906,7 @@ $$
 
 
 $$
-
 \frac{1}{\sqrt{d_i d_j}} \mathbf{F}_i \mathbf{F}_j^{\top}=\frac{1}{\sqrt{d_j d_i}} \mathbf{F}_j \mathbf{F}_i^{\top}
-
 $$
 
 
@@ -1021,9 +915,7 @@ $$
 
 
 $$
-
 \sum_{i, j=1}^m(\mathbf{W})_{i j} \frac{1}{\sqrt{d_i d_j}} \mathbf{F}_i \mathbf{F}_j^{\top}=\sum_{i, j=1}^m(\mathbf{W})_{i j} \frac{1}{\sqrt{d_j d_i}} \mathbf{F}_j \mathbf{F}_i^{\top}
-
 $$
 
 
@@ -1033,9 +925,7 @@ $\frac{1}{2}$ ):
 
 
 $$
-
 \frac{1}{2}\left(\sum_{i, j=1}^m(\mathbf{W})_{i j}\left\|\frac{1}{\sqrt{d_i}} \mathbf{F}_i-\frac{1}{\sqrt{d_j}} \mathbf{F}_j\right\|^2\right)=\operatorname{tr}\left(\mathbf{F F}^{\top}\right)-\operatorname{tr}\left(\mathbf{S F F}^{\top}\right)
-
 $$
 
 
@@ -1047,9 +937,7 @@ $$
 
 
 $$
-
 \mathcal{Q}(F)=\frac{1}{2} \sum_{i, j=1}^n W_{i j}\left\|\frac{F_i}{\sqrt{D_{i i}}}-\frac{F_j}{\sqrt{D_{j j}}}\right\|^2+\mu \sum_{i=1}^n\left\|F_i-Y_i\right\|^2,
-
 $$
 
 
@@ -1069,9 +957,7 @@ $\mu \sum_{i=l+1}^{l+u}\left\|\mathbf{F}_i\right\|^2$, 式(13.21)
 
 
 $$
-
 \sum_{i=1}^m\left\|\mathbf{F}_i-\mathbf{Y}_i\right\|^2=\|\mathbf{F}-\mathbf{Y}\|_{\mathrm{F}}^2
-
 $$
 
 
@@ -1081,12 +967,10 @@ $\mathcal{Q}(\mathbf{F})=\operatorname{tr}\left(\mathbf{F} \mathbf{F}^{\top}\rig
 求导: 
 
 $$
-
 \begin{aligned}
 \frac{\partial \mathcal{Q}(\mathbf{F})}{\partial \mathbf{F}} & =\frac{\partial \operatorname{tr}\left(\mathbf{F} \mathbf{F}^{\top}\right)}{\partial \mathbf{F}}-\frac{\partial \operatorname{tr}\left(\mathbf{S} \mathbf{F} \mathbf{F}^{\top}\right)}{\partial \mathbf{F}}+\mu \frac{\partial\|\mathbf{F}-\mathbf{Y}\|_{\mathrm{F}}^2}{\partial \mathbf{F}} \\
 & =2 \mathbf{F}-2 \mathbf{S} \mathbf{F}+2 \mu(\mathbf{F}-\mathbf{Y})
 \end{aligned}
-
 $$
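The matrix gradient above can be verified against central finite differences (a sketch added to these notes; sizes, $\mu=0.7$, and random data are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
m, c, mu = 5, 2, 0.7
S = rng.random((m, m)); S = (S + S.T) / 2   # symmetric, as in the text
F = rng.normal(size=(m, c)); Y = rng.normal(size=(m, c))

def Q(F):
    # tr(F F^T) - tr(S F F^T) + mu * ||F - Y||_F^2
    return np.trace(F @ F.T) - np.trace(S @ F @ F.T) + mu * np.sum((F - Y) ** 2)

grad = 2 * F - 2 * S @ F + 2 * mu * (F - Y)  # the stated analytic gradient

eps = 1e-6
num = np.zeros_like(F)
for i in range(m):
    for j in range(c):
        E = np.zeros_like(F); E[i, j] = eps
        num[i, j] = (Q(F + E) - Q(F - E)) / (2 * eps)
assert np.allclose(grad, num, atol=1e-4)
```

Note that $\partial\,\mathrm{tr}(\mathbf{S}\mathbf{F}\mathbf{F}^{\top})/\partial\mathbf{F}=(\mathbf{S}+\mathbf{S}^{\top})\mathbf{F}$, which equals $2\mathbf{S}\mathbf{F}$ only because $\mathbf{S}$ is symmetric.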
 
 Let $\mu=\frac{1-\alpha}{\alpha}$, and let

+ 0 - 88
docs/chapter14/chapter14.md

@@ -32,9 +32,7 @@ $y$ 的概率, 即根据 $\boldsymbol{x}$ "判别" $y$, 因此称为 "判别式
 
 
 $$
-
 P\left(x_1, y_1, \ldots, x_n, y_n\right)=P\left(x_1, \ldots, x_n \mid y_1, \ldots, y_n\right) \cdot P\left(y_1, \ldots, y_n\right)
-
 $$
 
 
@@ -43,14 +41,12 @@ $$
 
 
 $$
-
 \begin{aligned}
 P\left(y_1, \ldots, y_n\right) & =P\left(y_n \mid y_1, \ldots, y_{n-1}\right) \cdot P\left(y_1, \ldots, y_{n-1}\right) \\
 & =P\left(y_n \mid y_1, \ldots, y_{n-1}\right) \cdot P\left(y_{n-1} \mid y_1, \ldots, y_{n-2}\right) \cdot P\left(y_1, \ldots, y_{n-2}\right) \\
 & =\ldots \ldots \\
 & =P\left(y_n \mid y_1, \ldots, y_{n-1}\right) \cdot P\left(y_{n-1} \mid y_1, \ldots, y_{n-2}\right) \cdot \ldots \cdot P\left(y_2 \mid y_1\right) \cdot P\left(y_1\right)
 \end{aligned}
-
 $$
 
 
@@ -59,13 +55,11 @@ $$
 determined; based on this dependency relation, we have 
 
 $$
-
 \begin{aligned}
 P\left(y_n \mid y_1, \ldots, y_{n-1}\right) & =P\left(y_n \mid y_{n-1}\right) \\
 P\left(y_{n-1} \mid y_1, \ldots, y_{n-2}\right) & =P\left(y_{n-1} \mid y_{n-2}\right) \\
 P\left(y_{n-2} \mid y_1, \ldots, y_{n-3}\right) & =P\left(y_{n-2} \mid y_{n-3}\right)
 \end{aligned}
-
 $$
 
 
@@ -73,12 +67,10 @@ $$
 Therefore $P\left(y_1, \ldots, y_n\right)$ can be simplified to 
 
 $$
-
 \begin{aligned}
 P\left(y_1, \ldots, y_n\right) & =P\left(y_n \mid y_{n-1}\right) \cdot P\left(y_{n-1} \mid y_{n-2}\right) \cdot \ldots \cdot P\left(y_2 \mid y_1\right) \cdot P\left(y_1\right) \\
 & =P\left(y_1\right) \prod_{i=2}^n P\left(y_i \mid y_{i-1}\right)
 \end{aligned}
-
 $$
 
 
@@ -88,13 +80,11 @@ $$
 is independent of the values of the other state variables and observed variables. Therefore 
 
 $$
-
 \begin{aligned}
 P\left(x_1, \ldots, x_n \mid y_1, \ldots, y_n\right) & =P\left(x_1 \mid y_1, \ldots, y_n\right) \cdot \ldots \cdot P\left(x_n \mid y_1, \ldots, y_n\right) \\
 & =P\left(x_1 \mid y_1\right) \cdot \ldots \cdot P\left(x_n \mid y_n\right) \\
 & =\prod_{i=1}^n P\left(x_i \mid y_i\right)
 \end{aligned}
-
 $$
 
 
@@ -102,13 +92,11 @@ $$
 Putting the above together, we obtain 
 
 $$
-
 \begin{aligned}
 P\left(x_1, y_1, \ldots, x_n, y_n\right) & =P\left(x_1, \ldots, x_n \mid y_1, \ldots, y_n\right) \cdot P\left(y_1, \ldots, y_n\right) \\
 & =\left(\prod_{i=1}^n P\left(x_i \mid y_i\right)\right) \cdot\left(P\left(y_1\right) \prod_{i=2}^n P\left(y_i \mid y_{i-1}\right)\right) \\
 & =P\left(y_1\right) P\left(x_1 \mid y_1\right) \prod_{i=2}^n P\left(y_i \mid y_{i-1}\right) P\left(x_i \mid y_i\right)
 \end{aligned}
-
 $$
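As a small numerical check of this factorized HMM joint (an illustrative addition; the state/observation sizes and random parameters are arbitrary), the factorization defines a valid distribution, i.e. it sums to 1 over all state and observation sequences:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
nY, nX, n = 2, 2, 3
pi0 = rng.dirichlet(np.ones(nY))           # P(y1)
A = rng.dirichlet(np.ones(nY), size=nY)    # A[i, j] = P(y_{t+1}=j | y_t=i)
B = rng.dirichlet(np.ones(nX), size=nY)    # B[i, k] = P(x=k | y=i)

def joint(xs, ys):
    # P(y1) P(x1|y1) * prod_t P(y_t|y_{t-1}) P(x_t|y_t)
    p = pi0[ys[0]] * B[ys[0], xs[0]]
    for t in range(1, n):
        p *= A[ys[t - 1], ys[t]] * B[ys[t], xs[t]]
    return p

total = sum(joint(xs, ys)
            for xs in product(range(nX), repeat=n)
            for ys in product(range(nY), repeat=n))
assert np.isclose(total, 1.0)
```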
 
 
@@ -150,9 +138,7 @@ $\psi_{A C}\left(x_A^{\prime}, x_C\right)$ 与变量 $x_B^{\prime}$ 无关,
 
 
 $$
-
 \sum_{x_A^{\prime}} \sum_{x_B^{\prime}} \psi_{A C}\left(x_A^{\prime}, x_C\right) \psi_{B C}\left(x_B^{\prime}, x_C\right)=\sum_{x_A^{\prime}} \psi_{A C}\left(x_A^{\prime}, x_C\right) \sum_{x_B^{\prime}} \psi_{B C}\left(x_B^{\prime}, x_C\right)
-
 $$
 
 
@@ -162,13 +148,11 @@ $\mathbf{x}=\left\{x_1, x_2, x_3\right\}, \mathbf{y}=\left\{y_1, y_2, y_3\right\
 
 $$
-
 \begin{aligned}
 \sum_{i=1}^3 \sum_{j=1}^3 x_i y_j & =x_1 y_1+x_1 y_2+x_1 y_3+x_2 y_1+x_2 y_2+x_2 y_3+x_3 y_1+x_3 y_2+x_3 y_3 \\
 & =x_1 \times\left(y_1+y_2+y_3\right)+x_2 \times\left(y_1+y_2+y_3\right)+x_3 \times\left(y_1+y_2+y_3\right) \\
 & =\left(x_1+x_2+x_3\right) \times\left(y_1+y_2+y_3\right)=\left(\sum_{i=1}^3 x_i\right)\left(\sum_{j=1}^3 y_j\right)
 \end{aligned}
-
 $$
 
 
@@ -221,9 +205,7 @@ $P(\mathbf{y} \mid \mathbf{x})$, 因此它 是一种判别式模型, 参见"西
 
 
 $$
-
 P\left(y_{v} | \mathbf{x}, \mathbf{y}_{V \backslash\{v\}}\right)=P\left(y_{v} | \mathbf{x}, \mathbf{y}_{n(v)}\right)
-
 $$
 
 
@@ -264,14 +246,12 @@ $$
 Be sure to understand and remember its meaning. The derivation proceeds step by step as follows: 
 
 $$
-
 \begin{aligned}
 & m_{12}\left(x_2\right)=\sum_{x_1} P\left(x_1\right) P\left(x_2 \mid x_1\right)=\sum_{x_1} P\left(x_2, x_1\right)=P\left(x_2\right) \\
 & m_{23}\left(x_3\right)=\sum_{x_2} P\left(x_3 \mid x_2\right) m_{12}\left(x_2\right)=\sum_{x_2} P\left(x_3, x_2\right)=P\left(x_3\right) \\
 & m_{43}\left(x_3\right)=\sum_{x_4} P\left(x_4 \mid x_3\right) m_{23}\left(x_3\right)=\sum_{x_4} P\left(x_4, x_3\right)=P\left(x_3\right) \text { (this differs from the book) } \\
 & m_{35}\left(x_5\right)=\sum_{x_3} P\left(x_5 \mid x_3\right) m_{43}\left(x_3\right)=\sum_{x_3} P\left(x_5, x_3\right)=P\left(x_5\right)
 \end{aligned}
-
 $$
 
 Note: the process here is slightly different from that in the book ("西瓜书"), but is essentially the same, because
@@ -313,14 +293,12 @@ $n(3) \backslash 5=\{2,4\}$ (因为 $x_3$ 有邻接结点 2,4 和 5 )。
 Next, again take computing $P\left(x_5\right)$ in Figure 14.7 as an example: 
 
 $$
-
 \begin{aligned}
 & m_{12}\left(x_2\right)=\sum_{x_1} \psi_{12}\left(x_1, x_2\right) \prod_{k \in n(1) \backslash 2} m_{k 1}\left(x_1\right)=\sum_{x_1} \psi_{12}\left(x_1, x_2\right) \\
& m_{23}\left(x_3\right)=\sum_{x_2} \psi_{23}\left(x_2, x_3\right) \prod_{k \in n(2) \backslash 3} m_{k 2}\left(x_2\right)=\sum_{x_2} \psi_{23}\left(x_2, x_3\right) m_{12}\left(x_2\right) \\
 & m_{43}\left(x_3\right)=\sum_{x_4} \psi_{34}\left(x_3, x_4\right) \prod_{k \in n(4) \backslash 3} m_{k 4}\left(x_4\right)=\sum_{x_4} \psi_{34}\left(x_3, x_4\right) \\
 & m_{35}\left(x_5\right)=\sum_{x_3} \psi_{35}\left(x_3, x_5\right) \prod_{k \in n(3) \backslash 5} m_{k 3}\left(x_3\right)=\sum_{x_3} \psi_{35}\left(x_3, x_5\right) m_{23}\left(x_3\right) m_{43}\left(x_3\right)
 \end{aligned}
-
 $$
 
 
@@ -337,14 +315,12 @@ $$
 
 
 $$
-
 \begin{aligned}
 \hat{f}&=\frac{1}{N} \sum_{j=1}^{M} f\left(x_{j}\right) \cdot m_j \\
 &= \sum_{j=1}^{M} f\left(x_{j}\right)\cdot \frac{m_j}{N} \\
 &\approx \sum_{j=1}^{M} f\left(x_{j}\right)\cdot p(x_j)  \\
 &\approx \int f(x) p(x) dx
 \end{aligned}
-
 $$
 
 
@@ -403,11 +379,9 @@ $x_3$ 之间还有很多个结点呢?
 such that 
 
 $$
-
 \begin{aligned}
 \boldsymbol{\pi} \mathbf{T}=\boldsymbol{\pi}
 \end{aligned}
-
 $$
 
 where,
@@ -419,22 +393,18 @@ $\boldsymbol{\pi}$是一个是一个$n$维向量,代表$s_1,s_2,..,s_n$对应
 In fact, the transition matrix only needs to satisfy the Markov detailed balance condition 
 
 $$
-
 \begin{aligned}
 \pi_i \mathbf{T}_{ij}=\pi_j \mathbf{T}_{ji}
 \end{aligned}
-
 $$
 
 which is Eq. (14.26); the notation used here differs slightly from the book ("西瓜书") for ease of understanding.
 The proof is as follows 
 
 $$
-
 \begin{aligned}
(\boldsymbol{\pi} \mathbf{T})_{j} = \sum _i \pi_i\mathbf{T}_{ij} = \sum _i \pi_j\mathbf{T}_{ji} = \pi_j
 \end{aligned}
-
 $$
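Detailed balance and the resulting stationarity can be checked on a concrete chain (a sketch added to these notes; the 3-state target and the uniform Metropolis proposal are arbitrary choices):

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])       # target stationary distribution
n = len(pi)

# Metropolis transition with a uniform symmetric proposal; such a chain
# satisfies detailed balance by construction
T = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            T[i, j] = (1.0 / n) * min(1.0, pi[j] / pi[i])
    T[i, i] = 1.0 - T[i].sum()       # leftover mass: rejected proposals

# detailed balance pi_i T_ij = pi_j T_ji ...
assert np.allclose(pi[:, None] * T, (pi[:, None] * T).T)
# ... implies stationarity pi T = pi
assert np.allclose(pi @ T, pi)
```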
 
 
@@ -467,14 +437,12 @@ $A\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)$ 的乘积表示。
 the left-hand side of Eq. (14.27) becomes: 
 
 $$
-
 \begin{aligned}
 & p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right) A\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right) \\
 = & p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right) \min \left(1, \frac{p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)}{p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)}\right) \\
 = & \min \left(p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right), p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right) \frac{p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)}{p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)}\right) \\
 = & \min \left(p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right), p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)\right)
 \end{aligned}
-
 $$
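The symmetry argument above (both sides reduce to the same $\min$ expression) can be verified exhaustively on a small discrete state space (an illustrative addition; the target $p$ and proposal $Q$ are random assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4))              # target distribution p(x)
Q = rng.dirichlet(np.ones(4), size=4)      # proposal Q[x, x'] = Q(x' | x)

def A(new, old):                           # MH acceptance probability
    return min(1.0, (p[new] * Q[new, old]) / (p[old] * Q[old, new]))

# check detailed balance of Eq. (14.27) for every pair of states
for a in range(4):
    for b in range(4):
        lhs = p[a] * Q[a, b] * A(b, a)
        rhs = p[b] * Q[b, a] * A(a, b)
        assert np.isclose(lhs, rhs)
```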
 
 
@@ -487,9 +455,7 @@ $\mathbf{x}^{t-1}$ 和 $\mathbf{x}^*$ 调换位置), 同理可得如上结果,
 
 
 $$
-
 A\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)=C \cdot p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)
-
 $$
 
 
@@ -520,9 +486,7 @@ $\frac{1}{p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-
 
 
 $$
-
 C=\min \left(\frac{1}{p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)}, \frac{1}{p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)}\right)
-
 $$
 
 
@@ -575,9 +539,7 @@ $\mathbf{x}_{\bar{i}}^*=\mathbf{x}_{\bar{i}}^{t-1}$ )
 
 
 $$
-
 \frac{p\left(\mathbf{x}^*\right) Q\left(\mathbf{x}^{t-1} \mid \mathbf{x}^*\right)}{p\left(\mathbf{x}^{t-1}\right) Q\left(\mathbf{x}^* \mid \mathbf{x}^{t-1}\right)}=\frac{p\left(x_i^* \mid \mathbf{x}_{\bar{i}}^*\right) p\left(\mathbf{x}_{\bar{i}}^*\right) p\left(x_i^{t-1} \mid \mathbf{x}_{\bar{i}}^*\right)}{p\left(x_i^{t-1} \mid \mathbf{x}_{\bar{i}}^{t-1}\right) p\left(\mathbf{x}_{\bar{i}}^{t-1}\right) p\left(x_i^* \mid \mathbf{x}_{\bar{i}}^{t-1}\right)}=1
-
 $$
 
 
@@ -620,9 +582,7 @@ $p(\mathbf{x}, \mathbf{z})=p(\mathbf{z} \mid \mathbf{x}) p(\mathbf{x})$,
 
 
 $$
-
 p(\mathbf{x})=\frac{p(\mathbf{x}, \mathbf{z})}{p(\mathbf{z} \mid \mathbf{x})}
-
 $$
 
 
@@ -631,9 +591,7 @@ $$
 
 
 $$
-
 p(\mathbf{x})=\frac{p(\mathbf{x}, \mathbf{z}) / q(\mathbf{z})}{p(\mathbf{z} \mid \mathbf{x}) / q(\mathbf{z})}
-
 $$
 
 
@@ -642,9 +600,7 @@ $$
 
 
 $$
-
 \ln p(\mathbf{x})=\ln \frac{p(\mathbf{x}, \mathbf{z}) / q(\mathbf{z})}{p(\mathbf{z} \mid \mathbf{x}) / q(\mathbf{z})}=\ln \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})}-\ln \frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})}
-
 $$
 
 
@@ -653,9 +609,7 @@ $$
 
 
 $$
-
 \int q(\mathbf{z}) \ln p(\mathbf{x}) \mathrm{d} \mathbf{z}=\int q(\mathbf{z}) \ln \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})} \mathrm{d} \mathbf{z}-\int q(\mathbf{z}) \ln \frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})} \mathrm{d} \mathbf{z}
-
 $$
 
 
@@ -665,9 +619,7 @@ $$
 
 
 $$
-
 \int q(\mathbf{z}) \ln p(\mathbf{x}) \mathrm{d} \mathbf{z}=\ln p(\mathbf{x}) \int q(\mathbf{z}) \mathrm{d} \mathbf{z}=\ln p(\mathbf{x})
-
 $$
 
 
@@ -677,9 +629,7 @@ $$
 
 
 $$
-
 \ln p(\mathbf{x})=\int q(\mathbf{z}) \ln \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})} \mathrm{d} \mathbf{z}-\int q(\mathbf{z}) \ln \frac{p(\mathbf{z} \mid \mathbf{x})}{q(\mathbf{z})} \mathrm{d} \mathbf{z}
-
 $$
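This decomposition, $\ln p(\mathbf{x})=\mathcal{L}(q)+\mathrm{KL}(q\,\|\,p(\mathbf{z}\mid\mathbf{x}))$, can be checked numerically with a discrete latent variable (a sketch added to these notes; the joint table and $q$ are random assumptions, with integrals becoming sums):

```python
import numpy as np

rng = np.random.default_rng(0)
# unnormalized slice p(x, z) for one fixed observed x, over 5 values of z
p_xz = rng.random(5)
p_x = p_xz.sum()                       # p(x) = sum_z p(x, z)
p_z_given_x = p_xz / p_x               # posterior p(z | x)
q = rng.dirichlet(np.ones(5))          # any variational q(z)

elbo = np.sum(q * np.log(p_xz / q))            # evidence lower bound L(q)
kl = np.sum(q * np.log(q / p_z_given_x))       # KL(q || p(z|x)) >= 0
assert np.isclose(np.log(p_x), elbo + kl)
assert kl >= 0                                 # hence L(q) <= ln p(x)
```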
 
 
@@ -693,9 +643,7 @@ $p(\mathbf{z} \mid \mathbf{x})$, 而 $\mathrm{KL}$
 
 
 $$
-
 \min _{q(\mathbf{z})} \operatorname{KL}(q(\mathbf{z}) \| p(\mathbf{z} \mid \mathbf{x}))
-
 $$
 
 
@@ -721,13 +669,11 @@ $q(\mathbf{z})=\prod_{i=1}^M q_i\left(\mathbf{z}_i\right)$,
 Substituting Eq. (14.35) into Eq. (14.33) gives: 
 
 $$
-
 \begin{aligned}
 \mathcal{L}(q) & =\int q(\mathbf{z}) \ln \frac{p(\mathbf{x}, \mathbf{z})}{q(\mathbf{z})} \mathrm{d} \mathbf{z}=\int q(\mathbf{z})\{\ln p(\mathbf{x}, \mathbf{z})-\ln q(\mathbf{z})\} \mathrm{d} \mathbf{z} \\
 & =\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right)\left\{\ln p(\mathbf{x}, \mathbf{z})-\ln \prod_{i=1}^M q_i\left(\mathbf{z}_i\right)\right\} \mathrm{d} \mathbf{z} \\
 & =\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \ln p(\mathbf{x}, \mathbf{z}) \mathrm{d} \mathbf{z}-\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \ln \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z} \triangleq \mathcal{L}_1(q)-\mathcal{L}_2(q)
 \end{aligned}
-
 $$
 
 
@@ -737,9 +683,7 @@ $Q(\mathbf{x}, \mathbf{z})$, 则上式可变形为:
 
 
 $$
-
 \mathcal{L}(q)=\int Q(\mathbf{x}, \mathbf{z}) \mathrm{d} \mathbf{z}=\int \cdots \int Q(\mathbf{x}, \mathbf{z}) \mathrm{d} \mathbf{z}_1 \mathrm{~d} \mathbf{z}_2 \cdots \mathrm{d} \mathbf{z}_M
-
 $$
 
 
@@ -750,9 +694,7 @@ $$
 
 
 $$
-
 \mathcal{L}_1(q)=\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \ln p(\mathbf{x}, \mathbf{z}) \mathrm{d} \mathbf{z}=\int q_j\left\{\int \ln p(\mathbf{x}, \mathbf{z}) \prod_{i \neq j}^M\left(q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z}_i\right)\right\} \mathrm{d} \mathbf{z}_j
-
 $$
 
 
@@ -763,9 +705,7 @@ $\ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right)=\int \ln p(\mathbf{x}, \math
 
 
 $$
-
 \mathcal{L}_1(q)=\int q_j \ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j
-
 $$
 
 
@@ -773,12 +713,10 @@ $$
 For the second term $\mathcal{L}_2(q):$ 
 
 $$
-
 \begin{aligned}
 \mathcal{L}_2(q) & =\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \ln \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z}=\int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \sum_{i=1}^M \ln q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z} \\
 & =\sum_{i=1}^M \int \prod_{i=1}^M q_i\left(\mathbf{z}_i\right) \ln q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z}=\sum_{i_1=1}^M \int \prod_{i_2=1}^M q_{i_2}\left(\mathbf{z}_{i_2}\right) \ln q_{i_1}\left(\mathbf{z}_{i_1}\right) \mathrm{d} \mathbf{z}
 \end{aligned}
-
 $$
 
 
@@ -789,12 +727,10 @@ $$
 integral term; consider the case when $i_1=j$: 
 
 $$
-
 \begin{aligned}
 \int \prod_{i_2=1}^M q_{i_2}\left(\mathbf{z}_{i_2}\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z} & =\int q_j\left(\mathbf{z}_j\right) \prod_{i_2 \neq j} q_{i_2}\left(\mathbf{z}_{i_2}\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z} \\
 & =\int q_j\left(\mathbf{z}_j\right) \ln q_j\left(\mathbf{z}_j\right)\left\{\int \prod_{i_2 \neq j} q_{i_2}\left(\mathbf{z}_{i_2}\right) \prod_{i_2 \neq j} \mathrm{~d} \mathbf{z}_{i_2}\right\} \mathrm{d} \mathbf{z}_j
 \end{aligned}
-
 $$
 
 
@@ -806,9 +742,7 @@ $q_2\left(\mathbf{z}_2\right)$ 和 $q_3\left(\mathbf{z}_3\right)$, 即:
 
 
 $$
-
 \iiint q_1\left(\mathbf{z}_1\right) q_2\left(\mathbf{z}_2\right) q_3\left(\mathbf{z}_3\right) \mathrm{d} \mathbf{z}_1 \mathrm{~d} \mathbf{z}_2 \mathrm{~d} \mathbf{z}_3=\int q_1\left(\mathbf{z}_1\right) \int q_2\left(\mathbf{z}_2\right) \int q_3\left(\mathbf{z}_3\right) \mathrm{d} \mathbf{z}_3 \mathrm{~d} \mathbf{z}_2 \mathrm{~d} \mathbf{z}_1
-
 $$
 
 
@@ -819,9 +753,7 @@ $\int q_1\left(\mathbf{z}_1\right) \mathrm{d} \mathbf{z}_1=\int q_2\left(\mathbf
 
 
 $$
-
 \int \prod_{i_2=1}^M q_{i_2}\left(\mathbf{z}_{i_2}\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z}=\int q_j\left(\mathbf{z}_j\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j
-
 $$
 
 
@@ -829,12 +761,10 @@ $$
 Hence the second term can be simplified to: 
 
 $$
-
 \begin{aligned}
 \mathcal{L}_2(q) & =\sum_{i_1=1}^M \int q_{i_1}\left(\mathbf{z}_{i_1}\right) \ln q_{i_1}\left(\mathbf{z}_{i_1}\right) \mathrm{d} \mathbf{z}_{i_1} \\
 & =\int q_j\left(\mathbf{z}_j\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j+\sum_{i_1 \neq j}^M \int q_{i_1}\left(\mathbf{z}_{i_1}\right) \ln q_{i_1}\left(\mathbf{z}_{i_1}\right) \mathrm{d} \mathbf{z}_{i_1}
 \end{aligned}
-
 $$
 
 
@@ -844,9 +774,7 @@ $$
 
 
 $$
-
 \mathcal{L}_2(q)=\int q_j\left(\mathbf{z}_j\right) \ln q_j\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j+\text { const }
-
 $$
 
 
@@ -868,7 +796,6 @@ $\ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right)$, 但该式却包
 term, that is: 
 
 $$
-
 \begin{aligned}
 & \int q_j\left\{\int \ln p(\mathbf{x}, \mathbf{z}) \prod_{i \neq j}^M\left(q_i\left(\mathbf{z}_i\right) \mathrm{d} \mathbf{z}_i\right)\right\} \mathrm{d} \mathbf{z}_j=\int q_j \mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})] \mathrm{d} \mathbf{z}_j \\
 & =\int q_j\left(\ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right)-\text { const }\right) \mathrm{d} \mathbf{z}_j \\
@@ -876,7 +803,6 @@ $$
 & =\int q_j \ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j-\text { const } \\
 &
 \end{aligned}
-
 $$
 
 
@@ -890,13 +816,11 @@ $$
 对于式(14.36), 可继续变形为: 
 
 $$
-
 \begin{aligned}
 \mathcal{L}(q) & =\int q_j \ln \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j-\int q_j \ln q_j \mathrm{~d} \mathbf{z}_j+\mathrm{const} \\
 & =\int q_j \ln \frac{\tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right)}{q_j} \mathrm{~d} \mathbf{z}_j+\mathrm{const} \\
 & =-\mathrm{KL}\left(q_j \| \tilde{p}\left(\mathbf{x}, \mathbf{z}_j\right)\right)+\mathrm{const}
 \end{aligned}
-
 $$
 
  注意, 在前面关于 "式(14.32) 式(14.34)的推导" 中提到,
@@ -924,12 +848,10 @@ $\ln q_j=\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]+\mathrm{const}$,
 对式(14.39)两边同时取 $\exp (\cdot)$ 操作, 得 
 
 $$
-
 \begin{aligned}
 q_j^*\left(\mathbf{z}_j\right) & =\exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]+\text { const }\right) \\
 & =\exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \cdot \exp (\text { const })
 \end{aligned}
-
 $$
 
  两边同时取积分 $\int(\cdot) \mathrm{d} \mathbf{z}_j$
@@ -938,33 +860,27 @@ $\int q_j^*\left(\mathbf{z}_j\right) \mathrm{d} \mathbf{z}_j=1$, 因此有
 
 
 $$
-
 \begin{aligned}
 1 & =\int \exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \cdot \exp (\text { const }) \mathrm{d} \mathbf{z}_j \\
 & =\exp (\text { const }) \int \exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \mathrm{d} \mathbf{z}_j
 \end{aligned}
-
 $$
 
  这里就是将常数拿到了积分号外面, 因此:
 
 
 $$
-
 \exp (\text { const })=\frac{1}{\int \exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \mathrm{d} \mathbf{z}_j}
-
 $$
 
 
 代入刚开始的表达式, 可得本式: 
 
 $$
-
 \begin{aligned}
 q_j^*\left(\mathbf{z}_j\right) & =\exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \cdot \exp (\text { const }) \\
 & =\frac{\exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right)}{\int \exp \left(\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]\right) \mathrm{d} \mathbf{z}_j}
 \end{aligned}
-
 $$
 
  实际上, 本式的分母为归一化因子, 以保证
@@ -982,12 +898,10 @@ $q_j^*\left(\mathbf{z}_j\right)$ 为概率分布。
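在 $\mathbf{z}_j$ 只取有限个离散值的假设下,上式的归一化可以用几行代码演示(数值均为随意假设),此时分母中的积分退化为求和,$q_j^*$ 即对 $\mathbb{E}_{i \neq j}[\ln p(\mathbf{x}, \mathbf{z})]$ 做 softmax:

```python
import numpy as np

# 随意假设的数值:z_j 取 4 个离散值,elnp[k] 代表 E_{i≠j}[ln p(x, z_j=k)]
elnp = np.array([-1.0, -2.5, -0.3, -4.0])

# 分子为 exp(E[...]),分母为其对 z_j 的求和(连续情形下为积分),起归一化作用
numer = np.exp(elnp)
q_star = numer / numer.sum()

print(q_star, q_star.sum())  # q_j* 各分量之和为 1,是合法的概率分布
```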
 
 
 $$
-
 p(\boldsymbol W,\boldsymbol z,\boldsymbol \beta,\boldsymbol \theta | \boldsymbol \alpha,\boldsymbol \eta) =
 \prod_{t=1}^{T}p(\boldsymbol \theta_t | \boldsymbol \alpha)
 \prod_{k=1}^{K}p(\boldsymbol \beta_k | \boldsymbol \eta) 
 (\prod_{n=1}^{N}P(w_{t,n} | z_{t,n}, \boldsymbol \beta_k)P( z_{t,n} | \boldsymbol \theta_t))
-
 $$
 
 
@@ -1016,9 +930,7 @@ $p(\mathbf{W}, \mathbf{z}, \boldsymbol{\beta}, \boldsymbol{\Theta} \mid \boldsym
 
 
 $$
-
 p_{\boldsymbol{\alpha}, \boldsymbol{\eta}}(\mathbf{z}, \boldsymbol{\beta}, \boldsymbol{\Theta} \mid \mathbf{W})=\frac{p_{\boldsymbol{\alpha}, \boldsymbol{\eta}}(\mathbf{W}, \mathbf{z}, \boldsymbol{\beta}, \boldsymbol{\Theta})}{p_{\boldsymbol{\alpha}, \boldsymbol{\eta}}(\mathbf{W})}
-
 $$
 
 

+ 15 - 3
docs/chapter15/chapter15.md

@@ -7,11 +7,16 @@
 ### 15.1.1 式(15.2)和式(15.3)的解释
 
 似然率统计量LRS定义为:
+
 $$\mathrm{LRS}=2 \cdot\left(\hat{m}_{+} \log _{2} \frac{\left(\frac{\hat{m}_{+}}{\hat{m}_{+}+\hat{m}_{-}}\right)}{\left(\frac{m_{+}}{m_{+}+m_{-}}\right)}+\hat{m}_{-} \log _{2} \frac{\left(\frac{\hat{m}_{-}}{\hat{m}_{+}+\hat{m}_{-}}\right)}{\left(\frac{m_{-}}{m_{+}+m_{-}}\right)}\right)$$
-同时,根据对数函数的定义,我们可以对式(15.3)进行化简: $$\begin{aligned}
+
+同时,根据对数函数的定义,我们可以对式(15.3)进行化简: 
+
+$$\begin{aligned}
 \mathrm{F}_{-} \text {Gain }&=\hat{m}_{+} \times\left(\log _{2} \frac{\hat{m}_{+}}{\hat{m}_{+}+\hat{m}_{-}}-\log _{2} \frac{m_{+}}{m_{+}+m_{-}}\right)\\
 &=\hat{m}_{+}\left(\log_{2}\frac{\frac{\hat{m}_{+}}{\hat{m}_{+}+\hat{m}_{-}}}{\frac{m_{+}}{m_{+}+m_{-}}}\right)
 \end{aligned}$$
+
 可以观察到F_Gain即为式(15.2)中LRS求和项中的第一项。这里"西瓜书"中做了详细的解释,FOIL仅考虑正例的信息量,由于关系数据中正例数往往远少于反例数,因此通常对正例应该赋予更多的关注。
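以下在一组随意假设的计数上,按式(15.2)与上面化简后的形式计算 LRS 与 F\_Gain 的最小草图:

```python
import math

# 随意假设的计数:规则覆盖前的正/反例数 m+, m-,覆盖后的正/反例数 m̂+, m̂-
m_pos, m_neg = 5, 5
mh_pos, mh_neg = 4, 1

def f_gain(mh_p, mh_n, m_p, m_n):
    # 式(15.3) 化简后的形式,即 LRS 求和项中的第一项
    return mh_p * math.log2((mh_p / (mh_p + mh_n)) / (m_p / (m_p + m_n)))

def lrs(mh_p, mh_n, m_p, m_n):
    # 式(15.2):似然率统计量
    t1 = mh_p * math.log2((mh_p / (mh_p + mh_n)) / (m_p / (m_p + m_n)))
    t2 = mh_n * math.log2((mh_n / (mh_p + mh_n)) / (m_n / (m_p + m_n)))
    return 2 * (t1 + t2)

print(f_gain(mh_pos, mh_neg, m_pos, m_neg), lrs(mh_pos, mh_neg, m_pos, m_neg))
```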
 
 ## 15.2 归纳逻辑程序设计
@@ -28,8 +33,11 @@ $C=A\vee B$,把$A=C_1 - \{L\}$和$L=C_2-\{\neg L\}$带入即得。
 
 根据式(15.7) $C=\left(C_1-\{L\}\right) \vee\left(C_2-\{\neg L\}\right)$
 和析合范式的删除操作,等式两边同时删除析合项$C_2-\{\neg L\}$有:
+
 $$C - (C_1 - \{L\}) = C_2-\{\neg L\}$$
+
 再次运用析合范式删除操作符的逆定义,等式两边同时加上析合项$\{\neg L\}$有:
+
 $$C_2=\left(C-\left(C_1-\{L\}\right)\right) \vee\{\neg L\}$$
 
 ### 15.2.4 式(15.10)的解释
@@ -54,6 +62,10 @@ $q \leftarrow A \wedge C$的共同逻辑子句$A$提取出来,并用逻辑文
 
 ### 15.2.8 式(15.16)的推导
 
-$\theta_1$为作者笔误,由15.9 $$\begin{aligned}
+$\theta_1$为作者笔误,由15.9
+
+$$\begin{aligned}
 C_{2}&=\left(C-\left(C_{1}-\{L_1\}\right)\right) \vee\{L_2\}\\
-\end{aligned}$$ 因为 $L_2=(\neg L_1\theta_1)\theta_2^{-1}$,替换得证。
+\end{aligned}$$
+
+因为 $L_2=(\neg L_1\theta_1)\theta_2^{-1}$,替换得证。

+ 49 - 17
docs/chapter16/chapter16.md

@@ -29,18 +29,28 @@ $\tau$很小,$Q$值大的动作更容易被选中(利用)。
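上述 Softmax 策略对温度参数 $\tau$ 的依赖可以用如下草图直观验证(Q 值为随意假设的数值):

```python
import numpy as np

def softmax_policy(Q, tau):
    # P(a) = exp(Q(a)/τ) / Σ_a' exp(Q(a')/τ)
    z = np.exp((Q - Q.max()) / tau)  # 先减去最大值,避免数值上溢
    return z / z.sum()

Q = np.array([1.0, 2.0, 1.5])     # 随意假设的动作价值
print(softmax_policy(Q, 0.1))      # τ 很小:近乎贪心,概率集中在 Q 最大的动作上(利用)
print(softmax_policy(Q, 100.0))    # τ 很大:近乎均匀分布(探索)
```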
 
 ### 16.3.1 式(16.7)的解释
 
-因为 $$\pi(x,a)=P(action=a|state=x)$$
+因为 
+
+$$\pi(x,a)=P(action=a|state=x)$$
+
 表示在状态$x$下选择动作$a$的概率,又因为动作事件之间两两互斥且和为动作空间,由全概率展开公式
-$$P(A)=\sum_{i=1}^{\infty}P(B_{i})P(A\mid B_{i})$$ 可得
+
+$$P(A)=\sum_{i=1}^{\infty}P(B_{i})P(A\mid B_{i})$$
+
+可得
+
 $$\begin{aligned}
 &\mathbb{E}_{\pi}[\frac{1}{T}r_{1}+\frac{T-1}{T}\frac{1}{T-1}\sum_{t=2}^{T}r_{t}\mid x_{0}=x]\\
 &=\sum_{a\in A}\pi(x,a)\sum_{x{}'\in X}P_{x\rightarrow x{}'}^{a}(\frac{1}{T}R_{x\rightarrow x{}'}^{a}+\frac{T-1}{T}\mathbb{E}_{\pi}[\frac{1}{T-1}\sum_{t=1}^{T-1}r_{t}\mid x_{0}=x{}'])
-\end{aligned}$$ 其中
+\end{aligned}$$
+
+其中
+
 $$\mathbb{E}[r_{1}\mid x_{0}=x]=\sum_{a\in A}\pi(x,a)\sum_{x^{\prime}\in X}P_{x\rightarrow x^{\prime}}^{a}R_{x\rightarrow x^{\prime}}^{a}$$
+
 最后一个等式用到了递归形式。
 
-Bellman
-等式定义了当前状态与未来状态之间的关系,表示当前状态的价值函数可以通过下个状态的价值函数来计算。
+Bellman等式定义了当前状态与未来状态之间的关系,表示当前状态的价值函数可以通过下个状态的价值函数来计算。
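上述 Bellman 等式的递归结构可以直接迭代求解。下面在一个随意假设的两状态 MDP 上做策略评估的最小草图(此处采用 γ 折扣形式):

```python
import numpy as np

gamma = 0.9
# 随意假设的 MDP:P[x, a, x'] 为转移概率,R[x, a, x'] 为奖赏,pi[x, a] 为策略
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[[1.0, 0.0], [0.0, 2.0]],
              [[0.5, 0.5], [0.0, 1.0]]])
pi = np.array([[0.6, 0.4],
               [0.5, 0.5]])

V = np.zeros(2)
for _ in range(500):
    # V(x) = Σ_a π(x,a) Σ_x' P(x'|x,a) ( R + γ V(x') ),反复代入直至收敛
    V = np.einsum('xa,xay,xay->x', pi, P, R + gamma * V)

print(V)
```

迭代是压缩映射(系数 γ),因此收敛后的 V 精确满足 Bellman 等式。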
 
 ### 16.3.2 式(16.8)的推导
 
@@ -62,8 +72,7 @@ V_{\gamma }^{\pi}(x)&=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty }\gamma^{t}r_{t+1}\mid
 
 ### 16.3.5 式(16.15)的解释
 
-最优 Bellman
-等式表明:最佳策略下的一个状态的价值必须等于在这个状态下采取最好动作得到的累积奖赏值的期望。
+最优 Bellman等式表明:最佳策略下的一个状态的价值必须等于在这个状态下采取最好动作得到的累积奖赏值的期望。
 
 ### 16.3.6 式(16.16)的推导
 
@@ -79,19 +88,25 @@ V^{\pi}(x) & \leqslant Q^{\pi}\left(x, \pi^{\prime}(x)\right) \\
 &\leqslant \cdots \\
 &\leqslant \sum_{x^{\prime} \in X} P_{x \rightarrow x^{\prime}}^{\pi^{\prime}(x)}\left(R_{x \rightarrow x^{\prime}}^{\pi^{\prime}(x)}+\sum_{x^{\prime\prime} \in X} P_{x^{\prime} \rightarrow x^{\prime\prime}}^{\pi^{\prime}(x^{\prime})}\left(\gamma R_{x^{\prime} \rightarrow x^{\prime \prime}}^{\pi^{\prime}(x^{\prime})}+\sum_{x^{\prime\prime\prime} \in X} P_{x^{\prime\prime} \rightarrow x^{\prime\prime\prime}}^{\pi^{\prime}(x^{\prime\prime})} \left(\gamma^2 R_{x^{\prime\prime} \rightarrow x^{\prime \prime \prime}}^{\pi^{\prime}(x^{\prime\prime})}+\cdots \right)\right)\right) \\
 &= V^{\pi'}(x) 
-\end{aligned}$$ 其中,使用了动作改变条件
-$$Q^{\pi}(x,\pi{}'(x))\geqslant V^{\pi}(x)$$ 以及状态-动作值函数
+\end{aligned}$$
+
+其中,使用了动作改变条件
+
+$$Q^{\pi}(x,\pi{}'(x))\geqslant V^{\pi}(x)$$
+
+以及状态-动作值函数
+
 $$Q^{\pi}(x^{\prime},\pi^{\prime}(x^{\prime}))=\sum_{x^{\prime\prime}\in X}P_{x^{\prime}\rightarrow x^{\prime\prime}}^{\pi^{\prime}(x^{\prime})}(R_{x^{\prime}\rightarrow x^{\prime\prime}}^{\pi^{\prime}(x^{\prime})}+\gamma V^{\pi}(x^{\prime\prime}))$$
+
 于是,当前状态的最优值函数为
+
 $$V^{\ast}(x)=V^{\pi{}'}(x)\geqslant V^{\pi}(x)$$
 
 ## 16.4 免模型学习
 
 ### 16.4.1 式(16.20)的解释
 
-如果 $\epsilon_k=\frac{1}{k}$,并且其值随 $k$ 增大而主角趋于零,则
-$\epsilon-$ 贪心是在无限的探索中的极限贪心(Greedy in the Limit with
-Infinite Exploration,简称GLIE)。
+如果 $\epsilon_k=\frac{1}{k}$,并且其值随 $k$ 增大而逐渐趋于零,则 $\epsilon$-贪心是在无限的探索中的极限贪心(Greedy in the Limit with Infinite Exploration,简称GLIE)。
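该衰减方式的一个最小草图如下(动作价值为随意假设的数值):

```python
import random

def epsilon_greedy(Q, k, rng=random):
    # ε_k = 1/k,随试验次数 k 增大逐渐趋于 0,策略趋于纯贪心
    eps = 1.0 / k
    if rng.random() < eps:
        return rng.randrange(len(Q))                   # 探索:均匀随机选动作
    return max(range(len(Q)), key=Q.__getitem__)       # 利用:选 Q 值最大的动作

Q = [0.1, 0.7, 0.3]  # 随意假设的动作价值
print([epsilon_greedy(Q, k) for k in (1, 10, 1000)])
```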
 
 ### 16.4.2 式(16.23)的解释
 
@@ -101,9 +116,17 @@ Weight),其用于修正两个分布的差异。
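重要性采样权重 $\frac{p(x)}{q(x)}$ 修正两个分布差异这一点,可以在离散分布上精确验证恒等式 $\mathbb{E}_{p}[f(x)]=\mathbb{E}_{q}\left[f(x)\frac{p(x)}{q(x)}\right]$(以下分布与 $f$ 均为随意假设):

```python
# 离散支撑 {0, 1, 2} 上的两个随意假设的分布
p = [0.2, 0.5, 0.3]   # 目标分布
q = [0.4, 0.4, 0.2]   # 采样分布
f = [1.0, 2.0, 3.0]   # 任取的函数 f(x)

e_p = sum(pi * fi for pi, fi in zip(p, f))
# 在 q 下对 f(x)·p(x)/q(x) 取期望,其中 p(x)/q(x) 即重要性采样权重
e_q_weighted = sum(qi * fi * (pi / qi) for pi, qi, fi in zip(p, q, f))

print(e_p, e_q_weighted)  # 两者相等
```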
 ### 16.4.3 式(16.31)的推导
 
 对比公式16.29
+
 $$Q_{t+1}^{\pi}(x,a)=Q_{t}^{\pi}(x,a)+\frac{1}{t+1}(r_{t+1}-Q_{t}^{\pi}(x,a))$$
-以及由 $$\frac{1}{t+1}=\alpha$$ 可知,若下式成立,则公式16.31成立
+
+以及由
+
+$$\frac{1}{t+1}=\alpha$$
+
+可知,若下式成立,则公式16.31成立
+
 $$r_{t+1}=R_{x\rightarrow x{}'}^{a}+\gamma Q_{t}^{\pi}(x{}',a{}')$$
+
 而$r_{t+1}$表示$t+1$步的奖赏,即状态$x$变化到$x'$的奖赏加上前面$t$步奖赏总和$Q_{t}^{\pi}(x{}',a{}')$的$\gamma$折扣,因此这个式子成立。
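据此,式(16.31)的一步更新可以写成如下草图(各数值均为随意假设):

```python
def sarsa_update(q, r, q_next, alpha, gamma):
    # Q(x,a) ← Q(x,a) + α( R + γ Q(x',a') − Q(x,a) )
    target = r + gamma * q_next   # 即推导中代入的 r_{t+1}
    return q + alpha * (target - q)

print(sarsa_update(q=0.0, r=1.0, q_next=2.0, alpha=0.5, gamma=0.9))
```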
 
 ## 16.5 值函数近似
@@ -116,20 +139,29 @@ $$r_{t+1}=R_{x\rightarrow x{}'}^{a}+\gamma Q_{t}^{\pi}(x{}',a{}')$$
 
 $$\begin{aligned}
 -\frac{\partial E_{\boldsymbol{\theta}}}{\partial \boldsymbol{\theta}} & = -\frac{\partial \mathbb{E}_{\boldsymbol{x} \sim \pi}\left[\left(V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})\right)^2\right]}{\partial \boldsymbol{\theta}}\\
-\end{aligned}$$ 将
-$V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})$
+\end{aligned}$$
+
+将$V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})$
 看成一个整体,根据链式法则(chain rule)可知
+
 $$-\frac{\partial \mathbb{E}_{\boldsymbol{x} \sim \pi}\left[\left(V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})\right)^2\right]}{\partial \boldsymbol{\theta}}=\mathbb{E}_{\boldsymbol{x} \sim \pi}\left[2\left(V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})\right) \frac{\partial V_{\boldsymbol{\theta}}(\boldsymbol{x})}{\partial \boldsymbol{\theta}}\right]$$
+
 $V_{\boldsymbol{\theta}}(\boldsymbol{x})$
 是一个标量,$\boldsymbol{\theta}$
 是一个向量,$\frac{\partial V_{\boldsymbol{\theta}}(\boldsymbol{x})}{\partial \boldsymbol{\theta}}$
-属于矩阵微积分中的标量对向量求偏导,因此 $$\begin{aligned}
+属于矩阵微积分中的标量对向量求偏导,因此
+
+$$\begin{aligned}
 \frac{\partial V_{\boldsymbol{\theta}}(\boldsymbol{x})}{\partial \boldsymbol{\theta}}&=
 \frac{\partial \boldsymbol{\theta}^\mathrm{T}{\boldsymbol{x}}}{\partial \boldsymbol{\theta}} \\
 & =\left[\frac{\partial \boldsymbol{\theta}^{\mathrm{T}} \boldsymbol{x}}{\partial \theta_1}, \frac{\partial \boldsymbol{\theta}^{\mathrm{T}} \boldsymbol{x}}{\partial \theta_2}, \cdots,\frac{\partial \boldsymbol{\theta}^{\mathrm{T}}\boldsymbol{x}}{\partial \theta_n}\right]^{\mathrm{T}} \\
 & =\left[x_1, x_2, \cdots,x_n\right]^{\mathrm{T}} \\
 & =\boldsymbol{x}
-\end{aligned}$$ 故 $$\begin{aligned}
+\end{aligned}$$
+
+故
+
+$$\begin{aligned}
 -\frac{\partial E_{\boldsymbol{\theta}}}{\partial \boldsymbol{\theta}} & =\mathbb{E}_{\boldsymbol{x} \sim \pi}\left[2\left(V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})\right) \frac{\partial V_{\boldsymbol{\theta}}(\boldsymbol{x})}{\partial \boldsymbol{\theta}}\right] \\
 & =\mathbb{E}_{\boldsymbol{x} \sim \pi}\left[2\left(V^\pi(\boldsymbol{x})-V_{\boldsymbol{\theta}}(\boldsymbol{x})\right) \boldsymbol{x}\right]
 \end{aligned}$$
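由上面的梯度,线性值函数近似的随机梯度更新为 $\boldsymbol{\theta}\leftarrow\boldsymbol{\theta}+\alpha\left(V^{\pi}(\boldsymbol{x})-\boldsymbol{\theta}^{\mathrm{T}}\boldsymbol{x}\right)\boldsymbol{x}$。下面在一组随意假设的无噪声样本上验证它确实收敛到真实参数:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])   # 随意假设的真实参数
X = rng.normal(size=(200, 3))              # 随意假设的状态特征
V = X @ theta_true                         # V^π(x) = θ_true^T x(无噪声)

theta = np.zeros(3)
alpha = 0.05
for _ in range(50):                        # 多轮遍历样本
    for x, v in zip(X, V):
        # θ ← θ + α ( V^π(x) − θ^T x ) x,即沿负梯度方向更新
        theta += alpha * (v - theta @ x) * x

print(theta)
```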

+ 1 - 2
docs/chapter4/chapter4.md

@@ -246,7 +246,6 @@ $$
 所以可进一步推得式(4.5)
 
 $$
-
 \operatorname{Gini}(D)=\sum_{k=1}^{|\mathcal{Y}|} \sum_{k^{\prime} \neq k} p_k p_{k^{\prime}}=1-\sum_{k=1}^{\mid \mathcal{Y |}} p_k^2
 $$
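式(4.5)两种写法的等价性可以在数值上直接验证(类别分布为随意假设):

```python
# 随意假设的类别概率分布 p_k(和为 1)
p = [0.5, 0.3, 0.2]

# Σ_k Σ_{k'≠k} p_k p_{k'}
gini_pairs = sum(pk * pk2 for i, pk in enumerate(p)
                 for j, pk2 in enumerate(p) if i != j)
# 1 − Σ_k p_k²
gini_sq = 1 - sum(pk ** 2 for pk in p)

print(gini_pairs, gini_sq)  # 两者相等
```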
 
@@ -412,7 +411,7 @@ $$
 
 ### 4.4.1 式(4.7)的解释
 
-此式所表达的思想很简单,就是以每两个相邻取值的中点作为划分点。下面以"西瓜书"中表4.3中西瓜数据集3.0为例来说明此式的用法。对于"密度"这个连续属性,已观测到的可能取值为$\{0.243,0.245,0.343,\linebreak0.360,0.403,0.437,0.481,0.556,0.593,0.608,0.634,0.639,0.657,0.666,0.697,0.719,0.774\}$共17个值,根据式(4.7)可知,此时$i$依次取1到16,那么"密度"这个属性的候选划分点集合为
+此式所表达的思想很简单,就是以每两个相邻取值的中点作为划分点。下面以"西瓜书"中表4.3中西瓜数据集3.0为例来说明此式的用法。对于"密度"这个连续属性,已观测到的可能取值为$\{0.243,0.245,0.343,0.360,0.403,0.437,0.481,0.556,0.593,0.608,0.634,0.639,0.657,0.666,0.697,0.719,0.774\}$共17个值,根据式(4.7)可知,此时$i$依次取1到16,那么"密度"这个属性的候选划分点集合为
 
 $$
 \begin{aligned}

+ 0 - 60
docs/chapter5/chapter5.md

@@ -19,9 +19,7 @@
 
 
 $$
-
 y=f\left(\sum\limits_{i=1}^{n}w_ix_i-\theta\right)=f(\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x}-\theta)
-
 $$
 
 
@@ -29,21 +27,17 @@ $$
 
 
 $$
-
 y=\varepsilon(\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x}-\theta)=\left\{\begin{array}{rcl}
 1,& {\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x} -\theta\geqslant 0};\\
 0,& {\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x} -\theta < 0}.\\
 \end{array} \right.
-
 $$
 
  由于$n$维空间中的超平面方程为
 
 
 $$
-
 w_1x_1+w_2x_2+\cdots+w_nx_n+b  =\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x} +b=0
-
 $$
 
 
@@ -53,9 +47,7 @@ $$
 
 
 $$
-
 T=\{(\boldsymbol{x}_1,y_1),(\boldsymbol{x}_2,y_2),\cdots,(\boldsymbol{x}_N,y_N)\}
-
 $$
 
 
@@ -63,9 +55,7 @@ $$
 
 
 $$
-
 \boldsymbol{w}^{\mathrm{T}}\boldsymbol{x}+b=0
-
 $$
 
 
@@ -75,9 +65,7 @@ $$
 
 
 $$
-
 \boldsymbol{w}^{\mathrm{T}}\boldsymbol{x}-\theta=0
-
 $$
 
 
@@ -85,9 +73,7 @@ $$
 
 
 $$
-
 (\hat{y}-y)\left(\boldsymbol{w}^\mathrm{T}\boldsymbol{x}-\theta\right)\geqslant0
-
 $$
 
 
@@ -95,10 +81,8 @@ $$
 
 
 $$
-
 L(\boldsymbol{w},\theta)=\sum_{\boldsymbol{x}\in M}(\hat{y}-y)
 \left(\boldsymbol{w}^\mathrm{T}\boldsymbol{x}-\theta\right)
-
 $$
 
 
@@ -108,9 +92,7 @@ $$
 
 
 $$
-
 T=\{(\boldsymbol{x}_1,y_1),(\boldsymbol{x}_2,y_2),\cdots,(\boldsymbol{x}_N,y_N)\}
-
 $$
 
 
@@ -118,9 +100,7 @@ $$
 
 
 $$
-
 \min\limits_{\boldsymbol{w},\theta}L(\boldsymbol{w},\theta)=\min\limits_{\boldsymbol{w},\theta}\sum_{\boldsymbol{x_i}\in M}(\hat{y}_i-y_i)(\boldsymbol{w}^\mathrm{T}\boldsymbol{x}_i-\theta)
-
 $$
 
 
@@ -128,9 +108,7 @@ $$
 
 
 $$
-
 -\theta=-1\cdot w_{n+1}=x_{n+1}\cdot w_{n+1}
-
 $$
 
 
@@ -138,14 +116,12 @@ $$
 
 
 $$
-
 \begin{aligned}
 \boldsymbol{w}^\mathrm{T}\boldsymbol{x_i}-\theta&=\sum
 \limits_{j=1}^n w_jx_j+x_{n+1}\cdot w_{n+1}\\ 
 &=\sum\limits_{j=1}^{n+1}w_jx_j\\
 &=\boldsymbol{w}^{\mathrm{T}}\boldsymbol{x_i}
 \end{aligned}
-
 $$
 
 
@@ -154,9 +130,7 @@ $$
 
 
 $$
-
 \min\limits_{\boldsymbol{w}}L(\boldsymbol{w})=\min\limits_{\boldsymbol{w}}\sum_{\boldsymbol{x_i}\in M}(\hat{y}_i-y_i)\boldsymbol{w}^\mathrm{T}\boldsymbol{x_i}
-
 $$
 
 
@@ -164,9 +138,7 @@ $$
 
 
 $$
-
 \nabla_{\boldsymbol{w}}L(\boldsymbol{w})=\sum_{\boldsymbol{x_i}\in M}(\hat{y}_i-y_i)\boldsymbol{x_i}
-
 $$
 
 
@@ -175,18 +147,14 @@ $$
 
 
 $$
-
 \boldsymbol w \leftarrow \boldsymbol w+\Delta \boldsymbol w
-
 $$
 
 
 
 
 $$
-
 \Delta \boldsymbol w=-\eta(\hat{y}_i-y_i)\boldsymbol x_i=\eta(y_i-\hat{y}_i)\boldsymbol x_i
-
 $$
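上述更新规则的一个最小可运行草图如下:在线性可分的"逻辑与"数据上训练感知机,激活函数取阶跃函数,偏置按上文约定以固定输入 $x_{n+1}=-1$ 吸收进权重:

```python
# 感知机:w ← w + η (y − ŷ) x,偏置通过固定输入 x_{n+1} = −1 吸收进 w
data = [((0, 0, -1), 0), ((0, 1, -1), 0), ((1, 0, -1), 0), ((1, 1, -1), 1)]  # 逻辑与
w = [0.0, 0.0, 0.0]
eta = 0.1

def predict(w, x):
    # 阶跃激活:w^T x ≥ 0 时输出 1,否则输出 0
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

for _ in range(20):                      # 多轮扫描;数据线性可分时必收敛
    for x, y in data:
        y_hat = predict(w, x)
        w = [wi + eta * (y - y_hat) * xi for wi, xi in zip(w, x)]

print([predict(w, x) for x, _ in data])
```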
 
 
@@ -200,9 +168,7 @@ $$
 
 
 $$
-
 (x_1,x_2)\rightarrow h_1=\varepsilon(x_1-x_2-0.5),h_2=\varepsilon(x_2-x_1-0.5)\rightarrow y=\varepsilon(h_1+h_2-0.5)
-
 $$
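上述两层网络确实解决了"异或"问题,可以直接验证:

```python
def step(z):
    # 阶跃函数 ε(z)
    return 1 if z >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 - x2 - 0.5)      # 隐层神经元 1
    h2 = step(x2 - x1 - 0.5)      # 隐层神经元 2
    return step(h1 + h2 - 0.5)    # 输出层

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```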
 
 
@@ -219,16 +185,13 @@ $$
 因为 
 
 $$
-
 \Delta \theta_j = -\eta \cfrac{\partial E_k}{\partial \theta_j}
-
 $$
 
 
 
 $$
-
 \begin{aligned} 
 \cfrac{\partial E_k}{\partial \theta_j} &= \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot\cfrac{\partial \hat{y}_j^k}{\partial \theta_j} \\
 &= \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot\cfrac{\partial [f(\beta_j-\theta_j)]}{\partial \theta_j} \\
@@ -240,16 +203,13 @@ $$
 &=(y_j^k-\hat{y}_j^k)\hat{y}_j^k\left(1-\hat{y}_j^k\right)  \\
 &= g_j
 \end{aligned}
-
 $$
 
  所以
 
 
 $$
-
 \Delta \theta_j = -\eta \cfrac{\partial E_k}{\partial \theta_j}=-\eta g_j
-
 $$
 
 
@@ -259,16 +219,13 @@ $$
 因为 
 
 $$
-
 \Delta v_{ih} = -\eta \cfrac{\partial E_k}{\partial v_{ih}}
-
 $$
 
 
 
 $$
-
 \begin{aligned} 
 \cfrac{\partial E_k}{\partial v_{ih}} &= \sum_{j=1}^{l} \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot \cfrac{\partial \hat{y}_j^k}{\partial \beta_j} \cdot \cfrac{\partial \beta_j}{\partial b_h} \cdot \cfrac{\partial b_h}{\partial \alpha_h} \cdot \cfrac{\partial \alpha_h}{\partial v_{ih}} \\
 &= \sum_{j=1}^{l} \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot \cfrac{\partial \hat{y}_j^k}{\partial \beta_j} \cdot \cfrac{\partial \beta_j}{\partial b_h} \cdot \cfrac{\partial b_h}{\partial \alpha_h} \cdot x_i \\ 
@@ -279,16 +236,13 @@ $$
 &= -b_h(1-b_h) \cdot \sum_{j=1}^{l} g_j \cdot w_{hj}  \cdot x_i \\
 &= -e_h \cdot x_i
 \end{aligned}
-
 $$
 
  所以
 
 
 $$
-
 \Delta v_{ih} =-\eta \cfrac{\partial E_k}{\partial v_{ih}} =\eta e_h x_i
-
 $$
 
 
@@ -298,16 +252,13 @@ $$
 因为 
 
 $$
-
 \Delta \gamma_h = -\eta \cfrac{\partial E_k}{\partial \gamma_h}
-
 $$
 
 
 
 $$
-
 \begin{aligned} 
 \cfrac{\partial E_k}{\partial \gamma_h} &= \sum_{j=1}^{l} \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot \cfrac{\partial \hat{y}_j^k}{\partial \beta_j} \cdot \cfrac{\partial \beta_j}{\partial b_h} \cdot \cfrac{\partial b_h}{\partial \gamma_h} \\
 &= \sum_{j=1}^{l} \cfrac{\partial E_k}{\partial \hat{y}_j^k} \cdot \cfrac{\partial \hat{y}_j^k}{\partial \beta_j} \cdot \cfrac{\partial \beta_j}{\partial b_h} \cdot f^{\prime}(\alpha_h-\gamma_h) \cdot (-1) \\
@@ -316,16 +267,13 @@ $$
 &= \sum_{j=1}^{l}g_j\cdot w_{hj} \cdot b_h(1-b_h)\\
 &=e_h
 \end{aligned}
-
 $$
 
  所以
 
 
 $$
-
 \Delta \gamma_h=-\eta\cfrac{\partial E_k}{\partial \gamma_h} = -\eta e_h
-
 $$
 
 
@@ -353,9 +301,7 @@ Machine,简称RBM)本质上是一个引入了隐变量的无向图模型,
 
 
 $$
-
 E_{\rm graph}=E_{\rm edges}+E_{\rm nodes}
-
 $$
 
 
@@ -363,9 +309,7 @@ $$
 
 
 $$
-
 E_{\rm edges}=\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}E_{{\rm edge}_{ij}}=-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}w_{ij}s_is_j
-
 $$
 
 
@@ -373,9 +317,7 @@ $$
 
 
 $$
-
 E_{\rm nodes}=\sum_{i=1}^nE_{{\rm node}_i}=-\sum_{i=1}^n\theta_is_i
-
 $$
 
 
@@ -383,9 +325,7 @@ $$
 
 
 $$
-
 E_{\rm graph}=E_{\rm edges}+E_{\rm nodes}=-\sum_{i=1}^{n-1}\sum_{j=i+1}^{n}w_{ij}s_is_j-\sum_{i=1}^n\theta_is_i
-
 $$
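按上式对一个三节点小图计算能量的草图(状态、连接权与阈值均为随意假设):

```python
# 三个节点的状态 s_i ∈ {0, 1},以及随意假设的连接权 w_ij、阈值 θ_i
s = [1, 0, 1]
w = {(0, 1): 0.5, (0, 2): -1.0, (1, 2): 2.0}   # 只列出 i < j 的边
theta = [0.2, -0.3, 0.1]

# E_graph = −Σ_{i<j} w_ij s_i s_j − Σ_i θ_i s_i
E_edges = -sum(wij * s[i] * s[j] for (i, j), wij in w.items())
E_nodes = -sum(t * si for t, si in zip(theta, s))
print(E_edges + E_nodes)
```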
 
 

Failā izmaiņas netiks attēlotas, jo tās ir par lielu
+ 0 - 170
docs/chapter6/chapter6.md


+ 0 - 136
docs/chapter7/chapter7.md

@@ -10,9 +10,7 @@
 
 
 $$
-
 R(c_i|\boldsymbol x)=1*P(c_1|\boldsymbol x)+...+1*P(c_{i-1}|\boldsymbol x)+0*P(c_i|\boldsymbol x)+1*P(c_{i+1}|\boldsymbol x)+...+1*P(c_N|\boldsymbol x)
-
 $$
 
 
@@ -20,9 +18,7 @@ $$
 
 
 $$
-
 R(c_i|\boldsymbol x)=1-P(c_i|\boldsymbol x)
-
 $$
 
  此即式(7.5)。
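也就是说,在 0/1 损失下最小化条件风险 $R(c_i|\boldsymbol x)=1-P(c_i|\boldsymbol x)$ 等价于最大化后验概率,如下草图所示(后验概率为随意假设的数值):

```python
# 随意假设的后验概率 P(c_i | x)
posterior = [0.1, 0.6, 0.3]

# 0/1 损失下的条件风险:R(c_i|x) = 1 − P(c_i|x)
risk = [1 - p for p in posterior]

best_by_risk = min(range(len(risk)), key=risk.__getitem__)        # 最小化风险
best_by_post = max(range(len(posterior)), key=posterior.__getitem__)  # 最大化后验
print(best_by_risk, best_by_post)  # 两者选出同一类别
```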
@@ -50,13 +46,11 @@ $$
 根据式(7.11)和式(7.10)可知参数求解式为 
 
 $$
-
 \begin{aligned}
 \hat{\boldsymbol{\theta}}_{c}&=\underset{\boldsymbol{\theta}_{c}}{\arg \max } LL\left(\boldsymbol{\theta}_{c}\right) \\
 &=\underset{\boldsymbol{\theta}_{c}}{\arg \min } -LL\left(\boldsymbol{\theta}_{c}\right) \\
 &= \underset{\boldsymbol{\theta}_{c}}{\arg \min }-\sum_{\boldsymbol{x} \in D_{c}} \log P\left(\boldsymbol{x} | \boldsymbol{\theta}_{c}\right)
 \end{aligned}
-
 $$
 
 
@@ -64,9 +58,7 @@ $$
 
 
 $$
-
 P\left(\boldsymbol{x} | \boldsymbol{\theta}_{c}\right)=P\left(\boldsymbol{x} | \boldsymbol{\mu}_{c}, \boldsymbol{\sigma}_{c}^{2}\right)=\frac{1}{\sqrt{(2 \pi)^{d}|\boldsymbol{\Sigma}_c|}} \exp \left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_c)\right)
-
 $$
 
 
@@ -74,14 +66,12 @@ $$
 
 
 $$
-
 \begin{aligned}
 (\hat{\boldsymbol{\mu}}_{c}, \hat{\boldsymbol{\Sigma}}_{c})&= \underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }-\sum_{\boldsymbol{x} \in D_{c}} \log\left[\frac{1}{\sqrt{(2 \pi)^{d}|\boldsymbol{\Sigma}_c|}} \exp \left(-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_c)\right)\right] \\
 &= \underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }-\sum_{\boldsymbol{x} \in D_{c}} \left[-\frac{d}{2}\log(2 \pi)-\frac{1}{2}\log|\boldsymbol{\Sigma}_c|-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_c)\right] \\
 &= \underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }\sum_{\boldsymbol{x} \in D_{c}} \left[\frac{d}{2}\log(2 \pi)+\frac{1}{2}\log|\boldsymbol{\Sigma}_c|+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_c)\right] \\
 &= \underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }\sum_{\boldsymbol{x} \in D_{c}} \left[\frac{1}{2}\log|\boldsymbol{\Sigma}_c|+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}-\boldsymbol{\mu}_c)\right] \\
 \end{aligned}
-
 $$
 
 
@@ -89,12 +79,10 @@ $$
 
 
 $$
-
 \begin{aligned}
 (\hat{\boldsymbol{\mu}}_{c}, \hat{\boldsymbol{\Sigma}}_{c})&=\underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }\sum_{i=1}^{n} \left[\frac{1}{2}\log|\boldsymbol{\Sigma}_c|+\frac{1}{2}(\boldsymbol{x}_{i}-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}_{i}-\boldsymbol{\mu}_c)\right]\\
 &=\underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }\frac{n}{2}\log|\boldsymbol{\Sigma}_c|+\sum_{i=1}^{n}\frac{1}{2}(\boldsymbol{x}_i-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}_i-\boldsymbol{\mu}_c)\\
 \end{aligned}
-
 $$
 
 
@@ -102,7 +90,6 @@ $$
 
 
 $$
-
 \begin{aligned}
 &\sum_{i=1}^{n}\frac{1}{2}(\boldsymbol{x}_i-\boldsymbol{\mu}_c)^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{x}_i-\boldsymbol{\mu}_c)\\
 =&\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_c^{-1}\sum_{i=1}^{n}(\boldsymbol{x}_i-\boldsymbol{\mu}_c)(\boldsymbol{x}_i-\boldsymbol{\mu}_c)^{\mathrm{T}}\right]\\
@@ -116,16 +103,13 @@ $$
 =&\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_c^{-1}\sum_{i=1}^{n}(\boldsymbol{x}_i-\bar{\boldsymbol{x}})(\boldsymbol{x}_i-\bar{\boldsymbol{x}})^{\mathrm{T}}\right]+\frac{n}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_c^{-1}(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})^{\mathrm{T}}\right]\\
 =&\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_c^{-1}\sum_{i=1}^{n}(\boldsymbol{x}_i-\bar{\boldsymbol{x}})(\boldsymbol{x}_i-\bar{\boldsymbol{x}})^{\mathrm{T}}\right]+\frac{n}{2}(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})
 \end{aligned}
-
 $$
 
  所以
 
 
 $$
-
 (\hat{\boldsymbol{\mu}}_{c}, \hat{\boldsymbol{\Sigma}}_{c})=\underset{(\boldsymbol{\mu}_{c},\boldsymbol{\Sigma}_c)}{\arg \min }\frac{n}{2}\log|\boldsymbol{\Sigma}_c|+\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_{c}^{-1}\sum_{i=1}^{n}(\boldsymbol{x}_i-\bar{\boldsymbol{x}})(\boldsymbol{x}_i-\bar{\boldsymbol{x}})^{\mathrm{T}}\right]+\frac{n}{2}(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})^{\mathrm{T}} \boldsymbol{\Sigma}_c^{-1}(\boldsymbol{\mu}_c-\bar{\boldsymbol{x}})
-
 $$
 
 
@@ -133,9 +117,7 @@ $$
 
 
 $$
-
 \hat{\boldsymbol{\mu}}_{c}=\bar{\boldsymbol{x}}=\frac{1}{n}\sum_{i=1}^{n}\boldsymbol{x}_i
-
 $$
 
 
@@ -143,9 +125,7 @@ $$
 
 
 $$
-
 \hat{\boldsymbol{\Sigma}}_{c}=\underset{\boldsymbol{\Sigma}_c}{\arg \min }\frac{n}{2}\log|\boldsymbol{\Sigma}_c|+\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}_{c}^{-1}\sum_{i=1}^{n}(\boldsymbol{x}_i-\bar{\boldsymbol{x}})(\boldsymbol{x}_i-\bar{\boldsymbol{x}})^{\mathrm{T}}\right]
-
 $$
 
 
@@ -155,9 +135,7 @@ $$
 
 
 $$
-
 \frac{n}{2}\log|\boldsymbol{\Sigma}|+\frac{1}{2}\operatorname{tr}\left[\boldsymbol{\Sigma}^{-1}\mathbf{B}\right]\geq\frac{n}{2}\log|\mathbf{B}|+\frac{pn}{2}(1-\log n)
-
 $$
 
 
@@ -176,21 +154,17 @@ $$
 
 
 $$
-
 \begin{aligned}
 L(\theta)&=\theta\cdot\theta\cdot(1-\theta)\cdot\theta\cdot(1-\theta)\\
 &=\theta^{3}(1-\theta)^2
 \end{aligned}
-
 $$
 
  对数似然为
 
 
 $$
-
 LL(\theta)=\ln L(\theta)=3\ln\theta+2\ln (1-\theta)
-
 $$
 
 
@@ -198,13 +172,11 @@ $$
 
 
 $$
-
 \begin{aligned}
     \frac{\partial LL(\theta)}{\partial\theta}&=\frac{\partial\left(3\ln\theta+2\ln (1-\theta)\right)}{\partial\theta}\\
     &=\frac{3}{\theta}-\frac{2}{1-\theta}\\
     &=\frac{3-5\theta}{\theta(1-\theta)}
 \end{aligned}
-
 $$
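令该导数为零可解得驻点 $\theta=\frac{3}{5}$,在数值上也可以验证它确为极大值点:

```python
import math

def ll(theta):
    # 对数似然 LL(θ) = 3 ln θ + 2 ln(1 − θ)
    return 3 * math.log(theta) + 2 * math.log(1 - theta)

# 在 (0, 1) 内的网格上搜索极大值点
grid = [i / 1000 for i in range(1, 1000)]
theta_hat = max(grid, key=ll)
print(theta_hat)  # 0.6,即 3/5
```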
 
 
@@ -226,10 +198,8 @@ $D=\{x_1,x_2,\cdots,x_n\}$,则根据贝叶斯式可得,在给定样本集$D$
 
 
 $$
-
 P(\theta|D)=\frac{P(D|\theta)P(\theta)}{P(D)}=\frac{P(D|\theta)P(\theta)}
 {\sum_{\theta}P(D|\theta)P(\theta)}
-
 $$
 
 
@@ -237,11 +207,9 @@ $$
 
 
 $$
-
 P(\theta|D)=\frac{P(D|\theta)P(\theta)}
 {\sum_{\theta}P(D|\theta)P(\theta)}=\frac{\prod_{i=1}^{n}P(x_i|\theta)
 P(\theta)}{\sum_{\theta}\prod_{i=1}^{n}P(x_i|\theta)P(\theta)}
-
 $$
 
 
@@ -259,9 +227,7 @@ Categorical分布又称为广义伯努利分布,是将伯努利分布中的随
 
 
 $$
-
 P(X=x_i)=p(x_i)=\theta_i
-
 $$
 
 
@@ -276,11 +242,9 @@ $$
 
 
 $$
-
 p(\boldsymbol{x};\boldsymbol{\alpha})=\frac{\Gamma \left(\sum _{i=1}^{k}\alpha _{i}\right)}
 {\prod _{i=1}^{k}\Gamma (\alpha _{i})}\prod
 _{i=1}^{k}x_{i}^{\alpha _{i}-1}
-
 $$
 
  其中$\Gamma (z)=\int
@@ -295,9 +259,7 @@ d}x$为Gamma函数,当$\boldsymbol{\alpha}=(1,1,\cdots,1)$时,Dirichlet分
 
 
 $$
-
 P(C=c_i)=P(c_i)=\theta_i
-
 $$
 
 
@@ -308,16 +270,13 @@ $$
 
 
 $$
-
 P(D|\boldsymbol{\theta})=\theta_1^{y_1}...\theta_k^{y_k}=\prod_{i=1}^{k}\theta_i^{y_i}
-
 $$
 
 
 则有后验概率 
 
 $$
-
 \begin{aligned}
 P(\boldsymbol{\theta}|D)&=\frac{P(D|\boldsymbol{\theta})P(\boldsymbol{\theta})}{P(D)}\\
 &=\frac{P(D|\boldsymbol{\theta})P(\boldsymbol{\theta})}{\sum_{\boldsymbol{\theta}}
@@ -326,7 +285,6 @@ P(D|\boldsymbol{\theta})P(\boldsymbol{\theta})}\\
 P(\boldsymbol{\theta})}{\sum_{\boldsymbol{\theta}}\left[\prod_{i=1}^{k}\theta_i^{y_i}\cdot
 P(\boldsymbol{\theta})\right]}
 \end{aligned}
-
 $$
 
 
@@ -334,16 +292,13 @@ $$
 
 
 $$
-
 P(\boldsymbol{\boldsymbol{\theta}};\boldsymbol{\alpha})=\frac{\Gamma \left(\sum_{i=1}^{k}\alpha_{i}\right)}{\prod_{i=1}^{k}\Gamma (\alpha_{i})}\prod_{i=1}^{k}\theta_{i}^{\alpha_{i}-1}
-
 $$
 
 
 将其代入$P(\boldsymbol{\theta}|D)$可得 
 
 $$
-
 \begin{aligned}
 P(\boldsymbol{\theta}|D)&=\dfrac{\prod_{i=1}^{k}\theta_i^{y_i}
 \cdot P(\boldsymbol{\theta})}{\sum_{\boldsymbol{\theta}}
@@ -371,7 +326,6 @@ _{i=1}^{k}\theta_{i}^{\alpha _{i}-1}}
 &=\dfrac{\prod_{i=1}^{k}\theta_i^{\alpha_{i}+y_i-1}}{\sum_{\boldsymbol{\theta}}
 \left[\prod_{i=1}^{k}\theta_i^{\alpha_{i}+y_i-1}\right]}
 \end{aligned}
-
 $$
 
 
@@ -379,7 +333,6 @@ $$
 \mathbb{R}^{k}$,则根据Dirichlet分布的定义可知 
 
 $$
-
 \begin{aligned}
 P(\boldsymbol{\theta};\boldsymbol{\alpha}+\boldsymbol{y})&=
 \dfrac{\Gamma \left(\sum _{i=1}^{k}(\alpha_{i}+y_i)\right)}{\prod _{i=1}^{k}\Gamma (\alpha_{i}+y_i)}\prod _{i=1}^{k}\theta_{i}^{\alpha_{i}+y_i-1} \\
@@ -397,14 +350,12 @@ _{i=1}^{k}(\alpha_{i}+y_i)\right)}{\prod _{i=1}^{k}\Gamma
 _{i=1}^{k}\theta_{i}^{\alpha_{i}+y_i-1}\right] \\
 \frac{1}{\sum_{\boldsymbol{\theta}}\left[\prod _{i=1}^{k}\theta_{i}^{\alpha_{i}+y_i-1}\right]}&=\frac{\Gamma \left(\sum _{i=1}^{k}(\alpha_{i}+y_i)\right)}{\prod _{i=1}^{k}\Gamma (\alpha_{i}+y_i)} \\
 \end{aligned}
-
 $$
 
 将此结论代入$P(\boldsymbol{\theta}|D)$可得
 
 
 $$
-
 \begin{aligned}
 P(\boldsymbol{\theta}|D)&=\frac{\prod_{i=1}^{k}\theta_i^{\alpha_{i}+y_i-1}}{\sum_{\boldsymbol{\theta}}\left[\prod_{i=1}^{k}\theta_i^{\alpha_{i}+y_i-1}\right]}\\
 &=\frac{\Gamma \left(\sum _{i=1}^{k}(\alpha_{i}+y_i)\right)}{\prod
@@ -412,7 +363,6 @@ _{i=1}^{k}\Gamma
 (\alpha_{i}+y_i)}\prod _{i=1}^{k}\theta_{i}^{\alpha _{i}+y_i-1} \\
 &=P(\boldsymbol{\theta};\boldsymbol{\alpha}+\boldsymbol{y})
 \end{aligned}
-
 $$
 
 
@@ -420,7 +370,6 @@ $$
 
 
 $$
-
 \begin{aligned}
 \theta_i&=\mathbb E_{P(\boldsymbol{\theta}|D)}[\theta_i]\\
 &=\mathbb E_{P(\boldsymbol{\theta};\boldsymbol{\alpha}+\boldsymbol{y})}[\theta_i]\\
@@ -428,7 +377,6 @@ $$
 &=\frac{\alpha_i+y_i}{\sum_{j=1}^k\alpha_j+\sum_{j=1}^ky_j}\\
 &=\frac{\alpha_i+y_i}{\sum_{j=1}^k\alpha_j+m}\\
 \end{aligned}
-
 $$
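以 $k=3$ 为例,用随意假设的先验参数与观测计数演示该后验期望估计:

```python
# 随意假设:Dirichlet 先验参数 α 与各取值的观测计数 y(样本总数 m = Σ y_i)
alpha = [1.0, 1.0, 1.0]
y = [2, 5, 3]
m = sum(y)

# θ_i = (α_i + y_i) / (Σ_j α_j + m)
theta = [(a + yi) / (sum(alpha) + m) for a, yi in zip(alpha, y)]
print(theta)  # 各分量之和为 1
```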
 
 
@@ -446,9 +394,7 @@ $$
 
 
 $$
-
 I(x_i,x_j|y)=\sum_{n=1}^{N}P(x_i,x_j|c_n)\log\frac{P(x_i,x_j|c_n)}{P(x_i|c_n)P(x_j|c_n)}
-
 $$
 
 
@@ -460,13 +406,11 @@ $$
 
 
 $$
-
 \begin{aligned}
 P(\boldsymbol{x}, c) & =P\left(x_1, x_2, \ldots, x_d, c\right) \\
 & =P\left(x_1, x_2, \ldots, x_d \mid c\right) P(c) \\
 & =P\left(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d \mid c, x_i\right) P\left(c, x_i\right)
 \end{aligned}
-
 $$
 
 
@@ -475,9 +419,7 @@ $$
 
 
 $$
-
 P(x_1,...,x_{i-1},x_{i+1},...,x_d|c,x_i)=\prod_{j=1\\j\neq i}^{d}P(x_j|c,x_i)
-
 $$
 
 
@@ -485,23 +427,19 @@ $$
 
 
 $$
-
 P(x_1,...,x_{i-1},x_{i+1},...,x_d|c,x_i)=\prod_{j=1}^{d}P(x_j|c,x_i)
-
 $$
 
 
 综上可得: 
 
 $$
-
 \begin{aligned}
 P(c|\boldsymbol{x})&=\frac{P(\boldsymbol{x},c)}{P(\boldsymbol{x})}\\ 
 &=\frac{P\left(c, x_i\right)P\left(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d \mid c, x_i\right)}{P(\boldsymbol{x})}\\
 &\propto P\left(c, x_i\right)P\left(x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_d \mid c, x_i\right) \\
 &=P\left(c, x_i\right)\prod_{j=1}^{d}P(x_j|c,x_i)
 \end{aligned}
-
 $$
 
 
@@ -520,27 +458,23 @@ $$
 
 
 $$
-
 \begin{aligned} 
 P(x_3,x_4|x_1)&=\frac{P(x_1,x_3,x_4)}{P(x_1)} \\
 &=\frac{P(x_1)P(x_3|x_1)P(x_4|x_1)}{P(x_1)} \\
 &=P(x_3|x_1)P(x_4|x_1) \\
 \end{aligned}
-
 $$
 
  顺序结构:在给定节点$x$的条件下$y,z$独立
 
 
 $$
-
 \begin{aligned} 
 P(y,z|x)&=\frac{P(x,y,z)}{P(x)} \\
 &=\frac{P(z)P(x|z)P(y|x)}{P(x)} \\
 &=\frac{P(z,x)P(y|x)}{P(x)} \\
 &=P(z|x)P(y|x) \\
 \end{aligned}
-
 $$
 
 
@@ -555,9 +489,7 @@ $$
 
 
 $$
-
 f\left(t x_1 + (1-t)x_2\right)\leqslant tf(x_1)+(1-t)f(x_2)
-
 $$
 
 
@@ -565,9 +497,7 @@ $$
 
 
 $$
-
 f(t_1 x_1 + t_2x_2+...+t_nx_n)\leqslant t_1f(x_1)+t_2f(x_2)+...+t_nf(x_n)
-
 $$
 
 
@@ -575,9 +505,7 @@ $$
 
 
 $$
-
 \varphi(\mathbb{E}[X])\leqslant \mathbb{E}[\varphi(X)]
-
 $$
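对凸函数 $\varphi(x)=x^2$,Jensen 不等式可以在一组随意假设的离散分布上数值验证:

```python
# 随意假设的随机变量取值及其概率
xs = [1.0, 2.0, 5.0]
ps = [0.2, 0.5, 0.3]

phi = lambda x: x * x                               # 凸函数 φ(x) = x²

e_x = sum(p * x for p, x in zip(ps, xs))            # E[X]
e_phi = sum(p * phi(x) for p, x in zip(ps, xs))     # E[φ(X)]
print(phi(e_x), e_phi)                               # φ(E[X]) ≤ E[φ(X)]
```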
 
 
@@ -589,12 +517,10 @@ $$
 
 
 $$
-
 \begin{aligned} 
 LL(\theta) &=\sum_{i=1}^{m} \ln p(x_i; \theta) \\ 
 &=\sum_{i=1}^{m} \ln \sum_{z_i} p(x_i, z_i; \theta) 
 \end{aligned}
-
 $$
 
 
@@ -606,13 +532,11 @@ $$
 
 
 $$
-
 \begin{aligned} 
 LL(\theta)&=\ln P(X\vert \theta)\\
 &=\ln \sum_Z P(X,Z\vert\theta)\\
 &=\ln \left(\sum_Z P(X\vert Z,\theta)P(Z\vert \theta)\right)
 \end{aligned}
-
 $$
 
 
@@ -620,18 +544,15 @@ EM算法采用的是通过迭代逐步近似极大化$L(\theta)$:假设第$t$
 
 
 $$
-
 \begin{aligned}
 LL(\theta)-LL(\theta^{(t)})&=\ln \left(\sum_Z P(X\vert Z,\theta)P(Z\vert \theta)\right)-\ln P(X\vert\theta^{(t)}) \\
 &=\ln \left(\sum_Z P(Z\vert X,\theta^{(t)}) \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})}\right)-\ln P(X\vert\theta^{(t)})
 \end{aligned}
-
 $$
 
  由上述Jensen不等式可得 
 
 $$
-
 \begin{aligned}
 LL(\theta)-LL(\theta^{(t)})
 &\geqslant \sum_Z P(Z\vert X,\theta^{(t)})\ln \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})}-\ln P(X\vert\theta^{(t)}) \\
@@ -640,25 +561,20 @@ LL(\theta)-LL(\theta^{(t)})
 &=\sum_Z P(Z\vert X,\theta^{(t)}) \left( \ln \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})} - \ln P(X\vert\theta^{(t)}) \right)\\
 &= \sum_Z P(Z\vert X,\theta^{(t)})\ln \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})P(X\vert\theta^{(t)})}
 \end{aligned}
-
 $$
 
 
 
 $$
-
 B(\theta,\theta^{(t)})=LL(\theta^{(t)})+\sum_Z P(Z\vert X,\theta^{(t)})\ln \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})P(X\vert\theta^{(t)})}
-
 $$
 
 
 
 $$
-
 LL(\theta)\geqslant B(\theta,\theta^{(t)})
-
 $$
 
 
@@ -666,9 +582,7 @@ $$
 
 
 $$
-
 B(\theta^{(t+1)},\theta^{(t)}) \geqslant B(\theta,\theta^{(t)})
-
 $$
 
 
@@ -676,18 +590,14 @@ $$
 
 
 $$
-
 LL(\theta^{(t+1)})\geqslant B(\theta^{(t+1)},\theta^{(t)})\geqslant B(\theta^{(t)},\theta^{(t)})=LL(\theta^{(t)})
-
 $$
 
 
 
 
 $$
-
 LL(\theta^{(t+1)})\geqslant LL(\theta^{(t)})
-
 $$
 
 
@@ -695,24 +605,20 @@ $$
 
 
 $$
-
 \begin{aligned}
 \theta^{(t+1)}&=\mathop{\arg\max}_{\theta}B(\theta,\theta^{(t)}) \\
 &=\mathop{\arg\max}_{\theta}\left( LL(\theta^{(t)})+\sum_Z P(Z\vert X,\theta^{(t)})\ln \cfrac{P(X\vert Z,\theta)P(Z\vert \theta)}{P(Z\vert X,\theta^{(t)})P(X\vert\theta^{(t)})}\right)
 \end{aligned}
-
 $$
 
  略去对$\theta$极大化而言是常数的项 
 
 $$
-
 \begin{aligned}
 \theta^{(t+1)}&=\mathop{\arg\max}_{\theta}\left(\sum_Z P(Z\vert X,\theta^{(t)})\ln\left( P(X\vert Z,\theta)P(Z\vert \theta)\right)\right) \\
 &=\mathop{\arg\max}_{\theta}\left(\sum_Z P(Z\vert X,\theta^{(t)})\ln P(X,Z\vert \theta)\right) \\
 &=\mathop{\arg\max}_{\theta}Q(\theta,\theta^{(t)})
 \end{aligned}
-
 $$
 
 
@@ -722,9 +628,7 @@ E步:计算完全数据的对数似然函数$\ln P(X,Z\vert \theta)$关于在
 
 
 $$
-
 Q(\theta,\theta^{(t)})=\mathbb{E}_Z[\ln P(X,Z\vert \theta)\vert X,\theta^{(t)}]=\sum_Z P(Z\vert X,\theta^{(t)})\ln P(X,Z\vert \theta)
-
 $$
 
 
@@ -735,13 +639,11 @@ M步:求使得$Q(\theta,\theta^{(t)})$达到极大的$\theta^{(t+1)}$。
 
 
 $$
-
 \begin{aligned} 
 LL(\theta) &=\sum_{i=1}^{m} \ln p(x_i; \theta) \\ 
 &=\sum_{i=1}^{m} \ln \sum_{z_i} p(x_i, z_i; \theta) \\
 &=\sum_{i=1}^{m} \ln \sum_{z_i} Q_i(z_i)\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)} \\
 \end{aligned}
-
 $$
 
 
@@ -749,9 +651,7 @@ $$
 
 
 $$
-
 \sum_{z_i} Q_i(z_i)\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}=\mathbb{E}_{z_i}\left[\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}\right]
-
 $$
 
 
@@ -759,29 +659,23 @@ $$
 
 
 $$
-
 \ln\left(\mathbb{E}_{z_i}\left[\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}\right]\right)\geqslant \mathbb{E}_{z_i}\left[\ln\left(\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}\right)\right]
-
 $$
 
 
 
 
 $$
-
 \ln\sum_{z_i} Q_i(z_i)\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}\geqslant \sum_{z_i} Q_i(z_i)\ln\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}
-
 $$
 
 
 将此式代入$LL(\theta)$可得 
 
 $$
-
 \begin{aligned} 
 LL(\theta) &=\sum_{i=1}^{m} \ln \sum_{z_i} Q_i(z_i)\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}\geqslant \sum_{i=1}^{m}\sum_{z_i} Q_i(z_i)\ln\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)} \quad \textcircled{1}
 \end{aligned}
-
 $$
 
 
@@ -789,54 +683,42 @@ $$
 
 
 $$
-
 \cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}=c
-
 $$
 
 
 
 
 $$
-
 p(x_i, z_i; \theta)=c\cdot Q_i(z_i)
-
 $$
 
 
 
 
 $$
-
 \sum_{z_i}p(x_i, z_i; \theta)=c\cdot \sum_{z_i}Q_i(z_i)
-
 $$
 
 
 
 
 $$
-
 \sum_{z_i}p(x_i, z_i; \theta)=c
-
 $$
 
 
 
 
 $$
-
 \cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)}=\sum_{z_i}p(x_i, z_i; \theta)
-
 $$
 
 
 
 
 $$
-
 Q_i(z_i)=\cfrac{p(x_i, z_i; \theta)}{\sum\limits_{z_i}p(x_i, z_i; \theta)}=\cfrac{p(x_i, z_i; \theta)}{p(x_i; \theta)}=p(z_i|x_i; \theta)
-
 $$
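The chain above can be checked numerically: once $Q_i$ is the posterior, the ratio $p(x_i,z_i;\theta)/Q_i(z_i)$ no longer depends on $z_i$ and equals $p(x_i;\theta)$ (joint probabilities below are made up):

```python
joint = [0.1, 0.25, 0.15]        # made-up values of p(x_i, z_i; theta) over z_i
px = sum(joint)                   # p(x_i; theta), marginalising out z_i
Q = [j / px for j in joint]       # Q_i(z_i) = p(z_i | x_i; theta)

ratios = [j / q for j, q in zip(joint, Q)]
assert all(abs(r - px) < 1e-12 for r in ratios)  # constant c = p(x_i; theta)
```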
 
 
@@ -844,14 +726,12 @@ $$
 
 
 $$
-
 \begin{aligned} 
 LL(\theta) &=\sum_{i=1}^{m} \ln \sum_{z_i} Q_i(z_i)\cfrac{p(x_i, z_i; \theta)}{Q_i(z_i)} & \quad \textcircled{2}\\
 &=\sum_{i=1}^{m} \ln \sum_{z_i}p(z_i|x_i; \theta)\cfrac{p(x_i, z_i; \theta)}{p(z_i|x_i; \theta)} & \quad \textcircled{3}\\
 &=\sum_{i=1}^{m}\sum_{z_i} p(z_i|x_i; \theta)\ln\cfrac{p(x_i, z_i; \theta)}{p(z_i|x_i; \theta)} & \quad \textcircled{4}\\
 &=\max\{B(\theta)\} & \quad \textcircled{5} \\
 \end{aligned}
-
 $$
 
 
@@ -859,20 +739,17 @@ $$
 
 
 $$
-
 \begin{aligned} 
 \theta^{(t+1)}&=\arg\max_{\theta}\max\{B(\theta)\}  & \quad \textcircled{6}\\
 &=\arg\max_{\theta}\sum_{i=1}^{m}\sum_{z_i} p(z_i|x_i;\theta^{(t)})\ln\cfrac{p(x_i, z_i; \theta)}{p(z_i|x_i; \theta^{(t)})}  & \quad \textcircled{7}\\
 &=\arg\max_{\theta}\sum_{i=1}^{m}\sum_{z_i} p(z_i|x_i;\theta^{(t)})\ln p(x_i, z_i; \theta) & \quad \textcircled{8}
 \end{aligned}
-
 $$
 
 Substituting $\theta^{(t+1)}$ into $LL(\theta)$ then yields
 
 
 $$
-
 \begin{aligned} 
 LL(\theta^{(t+1)}) &=\max\{B(\theta^{(t+1)})\} &\quad\textcircled{9} \\
 &=\sum_{i=1}^{m}\sum_{z_i} p(z_i|x_i; \theta^{(t+1)})\ln\cfrac{p(x_i, z_i; \theta^{(t+1)})}{p(z_i|x_i; \theta^{(t+1)})} &\quad\textcircled{10}\\
@@ -881,7 +758,6 @@ LL(\theta^{(t+1)}) &=\max\{B(\theta^{(t+1)})\} &\quad\textcircled{9} \\
 &=\max\{B(\theta^{(t)})\} &\quad\textcircled{13} \\
 &=LL(\theta^{(t)})&\quad\textcircled{14}
 \end{aligned}
-
 $$
 
 
@@ -889,9 +765,7 @@ $$
 
 
 $$
-
 Q(\theta,\theta^{(t)})=\sum_{i=1}^{m}\sum_{z_i} p(z_i|x_i; \theta^{(t)})\ln p(x_i, z_i; \theta)
-
 $$
 
 
@@ -905,14 +779,12 @@ M-step: find the $\theta^{(t+1)}$ that maximizes $Q(\theta,\theta^{(t)})$.
 
 
 $$
-
 \begin{aligned} Q(\theta|\theta^{(t)})&=\sum_Z P(Z|X,\theta^{(t)})\ln P(X,Z|\theta) \\
 &=\sum_{z_1,z_2,...,z_m}\left\{\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\ln\left[ \prod_{i=1}^m P(x_i,z_i|\theta) \right] \right\} \\
 &=\sum_{z_1,z_2,...,z_m}\left\{\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\left[ \sum_{i=1}^m\ln P(x_i,z_i|\theta) \right] \right\} \\
 &=\sum_{z_1,z_2,...,z_m}\left\{\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\left[\ln P(x_1,z_1|\theta) + \ln P(x_2,z_2|\theta) +...+ \ln P(x_m,z_m|\theta)\right] \right\} \\
 &=\sum_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_1,z_1|\theta) \right]+...+\sum_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_m,z_m|\theta) \right]  \\
 \end{aligned}
-
 $$
 
 
@@ -920,7 +792,6 @@ $$
 
 
 $$
-
 \begin{aligned} 
 &\sum\limits_{z_1,z_2,...,z_m}\left[\prod\limits_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_1,z_1|\theta) \right] \\
 =&\sum\limits_{z_1,z_2,...,z_m}\left[\prod_{i=2}^mP(z_i|x_i,\theta^{(t)})\cdot P(z_1|x_1,\theta^{(t)})\cdot\ln P(x_1,z_1|\theta) \right] \\
@@ -933,42 +804,35 @@ $$
 =&\sum_{z_1}P(z_1|x_1,\theta^{(t)})\ln P(x_1,z_1|\theta)\times \left\{1\times1\times...\times1\right\} \\
 =&\sum_{z_1}P(z_1|x_1,\theta^{(t)})\ln P(x_1,z_1|\theta)  \\
 \end{aligned}
-
 $$
 
 Therefore
 
 
 $$
-
 \sum\limits_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_1,z_1|\theta) \right]=\sum_{z_1}P(z_1|x_1,\theta^{(t)})\ln P(x_1,z_1|\theta)
-
 $$
 
 
 Similarly, we obtain 
 
 $$
-
 \begin{aligned} 
 \sum\limits_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_2,z_2|\theta) \right] &=\sum_{z_2}P(z_2|x_2,\theta^{(t)})\ln P(x_2,z_2|\theta) \\
 &\vdots\\
 \sum\limits_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_m,z_m|\theta) \right] &=\sum_{z_m}P(z_m|x_m,\theta^{(t)})\ln P(x_m,z_m|\theta)
 \end{aligned}
-
 $$
 
 Substituting the above into $Q(\theta|\theta^{(t)})$ gives
 
 
 $$
-
 \begin{aligned} 
 Q(\theta|\theta^{(t)})&=\sum_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_1,z_1|\theta) \right]+...+\sum_{z_1,z_2,...,z_m}\left[\prod_{i=1}^mP(z_i|x_i,\theta^{(t)})\cdot\ln P(x_m,z_m|\theta) \right]  \\
 &=\sum_{z_1}P(z_1|x_1,\theta^{(t)})\ln P(x_1,z_1|\theta) +...+\sum_{z_m}P(z_m|x_m,\theta^{(t)})\ln P(x_m,z_m|\theta) \\
 &=\sum_{i=1}^m\sum_{z_i}P(z_i|x_i,\theta^{(t)})\ln P(x_i,z_i|\theta)\\
 \end{aligned}
-
 $$
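The marginalisation trick behind this decomposition (every factor with index other than the one inside the logarithm sums to 1) can be verified by brute force on a tiny made-up example with $m=3$ samples and binary $z_i$:

```python
import itertools
import math

# Hypothetical posteriors P(z_i | x_i, theta_t) and joints P(x_i, z_i | theta)
post = [[0.3, 0.7], [0.6, 0.4], [0.2, 0.8]]   # m = 3 samples, z_i in {0, 1}
joint = [[0.2, 0.5], [0.1, 0.3], [0.4, 0.25]]

# left side: sum over all (z_1, z_2, z_3) of prod_i post_i(z_i) * ln P(x_1, z_1)
lhs = sum(
    math.prod(post[i][z[i]] for i in range(3)) * math.log(joint[0][z[0]])
    for z in itertools.product((0, 1), repeat=3)
)
# right side: the marginalised form used in the text
rhs = sum(post[0][z1] * math.log(joint[0][z1]) for z1 in (0, 1))
assert abs(lhs - rhs) < 1e-12
```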
 
 

File changes are not shown because they are too large
+ 0 - 176
docs/chapter8/chapter8.md


+ 0 - 76
docs/chapter9/chapter9.md

@@ -21,9 +21,7 @@ learning)。
 
 
 $$
-
 JC=\frac{|A\bigcap B|}{|A\bigcup B|}=\frac{|A\bigcap B|}{|A|+|B|-|A\bigcap B|}
-
 $$
 
 
@@ -42,9 +40,7 @@ The Jaccard coefficient can be used to describe the similarity of two sets.
 
 
 $$
-
 \mathrm{JC}=\frac{M_{11}}{M_{11}+M_{10}+M_{01}}
-
 $$
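Both forms of the Jaccard coefficient can be computed side by side; treating each binary vector as the set of indices where it equals 1 makes the equivalence explicit (the vectors below are made up):

```python
# Jaccard coefficient computed two equivalent ways for binary attribute vectors
a = [1, 1, 0, 1, 0, 0, 1]
b = [1, 0, 0, 1, 1, 0, 0]

m11 = sum(x == 1 and y == 1 for x, y in zip(a, b))  # 1 in both
m10 = sum(x == 1 and y == 0 for x, y in zip(a, b))  # 1 only in a
m01 = sum(x == 0 and y == 1 for x, y in zip(a, b))  # 1 only in b
jc_counts = m11 / (m11 + m10 + m01)

A = {i for i, x in enumerate(a) if x == 1}  # vectors as sets of "on" indices
B = {i for i, y in enumerate(b) if y == 1}
jc_sets = len(A & B) / len(A | B)

assert jc_counts == jc_sets
```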
 
 
@@ -67,9 +63,7 @@ $$
 
 
 $$
-
 \mathrm{JC}=\frac{|A\bigcap B|}{|A\bigcup B|}=\frac{|SS|}{|SS\bigcup SD\bigcup DS|}=\frac{a}{a+b+c}
-
 $$
 
 
@@ -78,9 +72,7 @@ $$
 
 
 $$
-
 \mathrm{JC}=\frac{M_{11}}{M_{11}+M_{10}+M_{01}}=\frac{a}{a+b+c}
-
 $$
 
 
@@ -96,9 +88,7 @@ The Rand Index is defined as follows:
 
 
 $$
-
 \mathrm{RI}=\frac{a+d}{a+b+c+d}=\frac{a+d}{m(m-1)/2}=\frac{2(a+d)}{m(m-1)}
-
 $$
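A small sketch of the Rand Index over all $m(m-1)/2$ sample pairs, with two made-up cluster assignments: $a$ counts pairs placed together by both clusterings, $d$ counts pairs separated by both.

```python
from itertools import combinations

C1 = [0, 0, 1, 1]   # reference clustering of m = 4 samples (hypothetical)
C2 = [0, 0, 1, 0]   # clustering being evaluated (hypothetical)
m = len(C1)

pairs = list(combinations(range(m), 2))
a = sum(C1[i] == C1[j] and C2[i] == C2[j] for i, j in pairs)  # same in both
d = sum(C1[i] != C1[j] and C2[i] != C2[j] for i, j in pairs)  # split in both
ri = 2 * (a + d) / (m * (m - 1))
assert ri == (a + d) / len(pairs)   # same value, written as in the formula
```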
 
 
@@ -154,12 +144,10 @@ (samples $16 \sim 17$), i.e. $m_{u, a, 2}=3$; among the bad melons there are 4 samples whose root is slightly curled
 
 
 $$
-
 \begin{aligned}
 \operatorname{VDM}_p(a, b) & =\left|\frac{m_{u, a, 1}}{m_{u, a}}-\frac{m_{u, b, 1}}{m_{u, b}}\right|^p+\left|\frac{m_{u, a, 2}}{m_{u, a}}-\frac{m_{u, b, 2}}{m_{u, b}}\right|^p \\
 & =\left|\frac{5}{8}-\frac{3}{7}\right|^p+\left|\frac{3}{8}-\frac{4}{7}\right|^p
 \end{aligned}
-
 $$
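Plugging the counts quoted in the text into the formula (5 of 8 vs 3 of 7 for the first class, 3 of 8 vs 4 of 7 for the second):

```python
def vdm(p):
    # VDM_p(a, b) with the frequencies from the text
    return abs(5 / 8 - 3 / 7) ** p + abs(3 / 8 - 4 / 7) ** p

# both terms happen to equal 11/56, so for p = 1 the distance is 11/28
assert abs(vdm(1) - 11 / 28) < 1e-12
```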
 
 
@@ -179,9 +167,7 @@ a kmeans function is provided for calling. Learning vector quantization is also a form of unsupervised clustering,
 
 
 $$
-
 p(\boldsymbol{x})=\frac{1}{(2 \pi)^{\frac{n}{2}}|\boldsymbol{\Sigma}|^{\frac{1}{2}}} e^{-\frac{1}{2}(\boldsymbol{x}-\boldsymbol{\mu})^{\top} \boldsymbol{\Sigma}^{-1}(\boldsymbol{x}-\boldsymbol{\mu})}
-
 $$
 
 
@@ -189,9 +175,7 @@ $$
 
 
 $$
-
 p(x)=\frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\mu)^2}{2 \sigma^2}}
-
 $$
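For $n=1$ the multivariate density reduces to the univariate one, since $|\boldsymbol\Sigma| = \sigma^2$ and $\boldsymbol\Sigma^{-1} = 1/\sigma^2$; a quick numeric check (values are arbitrary):

```python
import math

def mvn_pdf(x, mu, Sigma):
    # multivariate normal density, written for the n = 1 case:
    # |Sigma| = sigma^2 and Sigma^{-1} = 1 / sigma^2
    n = 1
    det, inv = Sigma, 1.0 / Sigma
    quad = (x - mu) * inv * (x - mu)
    return math.exp(-0.5 * quad) / ((2 * math.pi) ** (n / 2) * det ** 0.5)

def uni_pdf(x, mu, sigma):
    # the familiar univariate normal density
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

assert abs(mvn_pdf(0.5, 0.0, 4.0) - uni_pdf(0.5, 0.0, 2.0)) < 1e-12
```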
 
 
@@ -262,9 +246,7 @@ $p_{\mathcal{M}}\left(z_j=i \mid \boldsymbol{x}_j\right)$ can be written as
 
 
 $$
-
 p_{\mathcal{M}}\left(z_j=i \mid \boldsymbol{x}_j\right)=\frac{P\left(z_j=i\right) \cdot p_{\mathcal{M}}\left(\boldsymbol{x}_j \mid z_j=i\right)}{p_{\mathcal{M}}\left(\boldsymbol{x}_j\right)}
-
 $$
 
 
@@ -287,14 +269,12 @@ the element in row $j$ and the corresponding column; the matrix $\Gamma$ has size $m \times k$, i.e.
 
 
 $$
-
 \Gamma=\left[\begin{array}{cccc}
 \gamma_{11} & \gamma_{12} & \cdots & \gamma_{1 k} \\
 \gamma_{21} & \gamma_{22} & \cdots & \gamma_{2 k} \\
 \vdots & \vdots & \ddots & \vdots \\
 \gamma_{m 1} & \gamma_{m 2} & \cdots & \gamma_{m k}
 \end{array}\right]_{m \times k}
-
 $$
 
 where $m$ is the number of training samples and $k$
@@ -341,9 +321,7 @@ $\prod_{j=1}^m p_{\mathcal{M}}\left(\boldsymbol{x}_j\right)$,
 
 
 $$
-
 p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)=\frac{1}{(2 \pi)^{\frac{n}{2}}\left|\boldsymbol{\Sigma}_{i}\right|^{\frac{1}{2}}} \exp \left(-\frac{1}{2}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{T} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\right)
-
 $$
 
 
@@ -351,22 +329,18 @@ $$
 
 
 $$
-
 \frac{\partial L L(D)}{\partial \boldsymbol{\mu}_{i}}=\frac{\partial L L(D)}{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \cdot \frac{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\partial \boldsymbol{\mu}_{i}}=0
-
 $$
 
 
 where: 
 
 $$
-
 \begin{aligned}
 \frac{\partial L L(D)}{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \mathbf{\Sigma}_{i}\right)} &=\frac{\partial \sum_{j=1}^{m} \ln \left(\sum_{l=1}^{k} \alpha_{l} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{l}, \boldsymbol{\Sigma}_{l}\right)\right)}{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \\
 &=\sum_{j=1}^{m} \frac{\partial \ln \left(\sum_{l=1}^{k} \alpha_{l} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{l}, \boldsymbol{\Sigma}_{l}\right)\right)}{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)} \\
 &=\sum_{j=1}^{m} \frac{\alpha_{i}}{\sum_{l=1}^{k} \alpha_{l} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{l}, \boldsymbol{\Sigma}_{l}\right)}
 \end{aligned}
-
 $$
 
 
@@ -374,7 +348,6 @@ $$
 
 
 $$
-
 \begin{aligned}
 \frac{\partial p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\partial \boldsymbol{\mu}_{i}} &=\frac{\partial \frac{1}{(2 \pi)^{\frac{n}{2}}\left|\Sigma_{i}\right|^{\frac{1}{2}}} \exp\left({-\frac{1}{2}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top}\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)}\right)}{\partial \boldsymbol{\mu}_{i}} \\
 &=\frac{1}{(2 \pi)^{\frac{n}{2}}\left|\boldsymbol{\Sigma}_{i}\right|^{\frac{1}{2}}} \cdot \frac{\partial \exp\left({-\frac{1}{2}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)}\right)}{\partial \boldsymbol{\mu}_{i}}\\
@@ -382,7 +355,6 @@ $$
 &=\frac{1}{(2 \pi)^{\frac{n}{2}}\left|\boldsymbol{\Sigma}_{i}\right|^{\frac{1}{2}}}\cdot \exp\left({-\frac{1}{2}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)}\right) \cdot\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)\\
 &=p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)
 \end{aligned}
-
 $$
 
 
@@ -390,21 +362,17 @@ $$
 
 
 $$
-
 \begin{aligned}
 -\frac{1}{2} \frac{\partial\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)^{\top} \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)}{\partial \boldsymbol{\mu}_{i}} &=-\frac{1}{2} \cdot 2 \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{\mu}_{i}-\boldsymbol{x}_{j}\right) \\
 &=\boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)
 \end{aligned}
-
 $$
 
  因此有:
 
 
 $$
-
 \frac{\partial L L(D)}{\partial \boldsymbol{\mu}_{i}}=\sum_{j=1}^{m} \frac{\alpha_{i}}{\sum_{l=1}^{k} \alpha_{l} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{l}, \mathbf{\Sigma}_{l}\right)} \cdot p\left(\boldsymbol{x}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right) \cdot \boldsymbol{\Sigma}_{i}^{-1}\left(\boldsymbol{x}_{j}-\boldsymbol{\mu}_{i}\right)=0
-
 $$
 
 
@@ -415,9 +383,7 @@ $$
 
 
 $$
-
 \gamma_{j i}=p_{\mathcal{M}}\left(z_{j}=i | \mathbf{X}_{j}\right)=\frac{\alpha_{i} \cdot p\left(\mathbf{X}_{j} | \boldsymbol{\mu}_{i}, \boldsymbol{\Sigma}_{i}\right)}{\sum_{l=1}^{k} \alpha_{l} \cdot p\left(\mathbf{X}_{j} | \boldsymbol{\mu}_{l}, \boldsymbol{\Sigma}_{l}\right)}
-
 $$
 
 
@@ -425,9 +391,7 @@ $$
 
 
 $$
-
 \sum_{j=1}^{m} \gamma_{j i}\left(\mathbf{X}_{j}-\boldsymbol{\mu}_{i}\right)=0
-
 $$
 
 
@@ -435,9 +399,7 @@ $$
 
 
 $$
-
 \sum_{j=1}^m \gamma_{j i} \boldsymbol{x}_j=\sum_{j=1}^m \gamma_{j i} \boldsymbol{\mu}_i=\boldsymbol{\mu}_i \cdot \sum_{j=1}^m \gamma_{j i}
-
 $$
 
 
@@ -446,9 +408,7 @@ $$
 
 
 $$
-
 \boldsymbol{\mu}_i=\frac{\sum_{j=1}^m \gamma_{j i} \boldsymbol{x}_j}{\sum_{j=1}^m \gamma_{j i}}
-
 $$
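This closed-form update is just a responsibility-weighted average of the samples; a one-dimensional sketch with made-up responsibilities $\gamma_{ji}$ (each row sums to 1 over the components):

```python
# mu_i = (sum_j gamma_ji * x_j) / (sum_j gamma_ji), in 1-D
x = [1.0, 2.0, 10.0, 11.0]                                   # made-up samples
gamma = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.05, 0.95]]   # made-up gamma_ji

mu = []
for i in range(2):
    num = sum(gamma[j][i] * x[j] for j in range(4))
    den = sum(gamma[j][i] for j in range(4))
    mu.append(num / den)

assert mu[0] < 5 < mu[1]   # each mean is pulled toward its own cluster
```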
 
 
@@ -459,9 +419,7 @@ $$
 
 
 $$
-
 p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})=\cfrac{1}{(2\pi)^\frac{n}{2}\left| \boldsymbol\Sigma_{i}\right |^\frac{1}{2}}\exp\left({-\frac{1}{2}(\boldsymbol x_{j}-\boldsymbol\mu_{i})^T\boldsymbol\Sigma_{i}^{-1}(\boldsymbol x_{j}-\boldsymbol\mu_{i})}\right)
-
 $$
 
 
@@ -469,28 +427,23 @@ $$
 
 
 $$
-
 \cfrac{\partial LL(D)}{\partial \boldsymbol\Sigma_{i}}=0
-
 $$
 
 we obtain
 
 
 $$
-
 \begin{aligned}
 \cfrac {\partial LL(D)}{\partial\boldsymbol\Sigma_{i}}&=\cfrac {\partial}{\partial \boldsymbol\Sigma_{i}}\left[\sum_{j=1}^m\ln\Bigg(\sum_{i=1}^k \alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\Bigg)\right] \\
 &=\sum_{j=1}^m\frac{\partial}{\partial\boldsymbol\Sigma_{i}}\left[\ln\Bigg(\sum_{i=1}^k \alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\Bigg)\right] \\
 &=\sum_{j=1}^m\cfrac{\alpha_{i}\cdot \cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left(p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\right)}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})} \\
 \end{aligned}
-
 $$
 
 where 
 
 $$
-
 \begin{aligned}
 \cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left(p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\right)&=\cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left[\cfrac{1}{(2\pi)^\frac{n}{2}\left| \boldsymbol\Sigma_{i}\right |^\frac{1}{2}}\exp\left({-\frac{1}{2}(\boldsymbol x_{j}-\boldsymbol\mu_{i})^T\boldsymbol\Sigma_{i}^{-1}(\boldsymbol x_{j}-\boldsymbol\mu_{i})}\right)\right] \\
 &=\cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left\{\exp\left[\ln\left(\cfrac{1}{(2\pi)^\frac{n}{2}\left| \boldsymbol\Sigma_{i}\right |^\frac{1}{2}}\exp\left({-\frac{1}{2}(\boldsymbol x_{j}-\boldsymbol\mu_{i})^T\boldsymbol\Sigma_{i}^{-1}(\boldsymbol x_{j}-\boldsymbol\mu_{i})}\right)\right)\right]\right\} \\
@@ -498,7 +451,6 @@ $$
 &=p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\cdot\cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left[\ln\cfrac{1}{(2\pi)^{\frac{n}{2}}}-\cfrac{1}{2}\ln{|\boldsymbol{\Sigma}_i|}-\frac{1}{2}(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)\right]\\
 &=p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\cdot\left[-\cfrac{1}{2}\cfrac{\partial\left(\ln{|\boldsymbol{\Sigma}_i|}\right) }{\partial \boldsymbol{\Sigma}_i}-\cfrac{1}{2}\cfrac{\partial \left[(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)\right]}{\partial \boldsymbol{\Sigma}_i}\right]\\
 \end{aligned}
-
 $$
 
 
@@ -506,11 +458,9 @@ $$
 
 
 $$
-
 \begin{aligned}
 \cfrac{\partial}{\partial\boldsymbol\Sigma_{i}}\left(p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\right)&=p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})\cdot\left[-\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}+\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}\right]\\
 \end{aligned}
-
 $$
 
 
@@ -518,9 +468,7 @@ $$
 
 
 $$
-
 \cfrac {\partial LL(D)}{\partial\boldsymbol\Sigma_{i}}=\sum_{j=1}^m\cfrac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}\cdot\left[-\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}+\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}\right]
-
 $$
 
 
@@ -528,9 +476,7 @@ $$
 
 
 $$
-
 \cfrac {\partial LL(D)}{\partial\boldsymbol\Sigma_{i}}=\sum_{j=1}^m\gamma_{ji}\cdot\left[-\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}+\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}\right]
-
 $$
 
 
@@ -538,16 +484,13 @@ $$
 
 
 $$
-
 \cfrac {\partial LL(D)}{\partial\boldsymbol\Sigma_{i}}=\sum_{j=1}^m\gamma_{ji}\cdot\left[-\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}+\cfrac{1}{2}\boldsymbol{\Sigma}_i^{-1}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}\right]=0
-
 $$
 
 
 Rearranging terms gives: 
 
 $$
-
 \begin{aligned}
 \sum_{j=1}^m\gamma_{ji}\cdot\left[-\boldsymbol{I}+(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}\right]&=0\\
 \sum_{j=1}^m\gamma_{ji}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T\boldsymbol{\Sigma}_i^{-1}&=\sum_{j=1}^m\gamma_{ji}\boldsymbol{I}\\
@@ -555,7 +498,6 @@ $$
 \boldsymbol{\Sigma}_i^{-1}\cdot\sum_{j=1}^m\gamma_{ji}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T&=\sum_{j=1}^m\gamma_{ji}\\
 \boldsymbol{\Sigma}_i&=\cfrac{\sum_{j=1}^m\gamma_{ji}(\boldsymbol x_j-\boldsymbol\mu_i)(\boldsymbol x_j-\boldsymbol\mu_i)^T}{\sum_{j=1}^m\gamma_{ji}}
 \end{aligned}
-
 $$
 
 This is exactly Eq. (9.35).
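In one dimension Eq. (9.35) is the responsibility-weighted scatter around the responsibility-weighted mean; a minimal sketch with made-up values:

```python
# Sigma_i = sum_j gamma_ji (x_j - mu_i)^2 / sum_j gamma_ji, in 1-D
x = [1.0, 2.0, 10.0, 11.0]                                   # made-up samples
gamma = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.05, 0.95]]   # made-up gamma_ji

i = 0                                                         # component index
den = sum(g[i] for g in gamma)
mu_i = sum(g[i] * xj for g, xj in zip(gamma, x)) / den        # weighted mean
sigma_i = sum(g[i] * (xj - mu_i) ** 2 for g, xj in zip(gamma, x)) / den
assert sigma_i > 0   # a valid (positive) variance
```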
@@ -570,9 +512,7 @@ $$
 
 
 $$
-
 L L(D)=\sum_{j=1}^m \ln \left(\sum_{l=1}^k \alpha_l \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l\right)\right)
-
 $$
 
 
@@ -581,22 +521,18 @@ confused with the variable $i$ when taking the derivative with respect to $\alpha_i$. Differentiating the two terms of Eq. (9.36) separately
 with respect to $\alpha_i$, we get 
 
 $$
-
 \begin{aligned}
 \frac{\partial L L(D)}{\partial \alpha_i} & =\frac{\partial \sum_{j=1}^m \ln \left(\sum_{l=1}^k \alpha_l \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l\right)\right)}{\partial \alpha_i} \\
 & =\sum_{j=1}^m \frac{1}{\sum_{l=1}^k \alpha_l \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l\right)} \cdot \frac{\partial \sum_{l=1}^k \alpha_l \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l\right)}{\partial \alpha_i} \\
 & =\sum_{j=1}^m \frac{1}{\sum_{l=1}^k \alpha_l \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_l, \boldsymbol{\Sigma}_l\right)} \cdot p\left(\boldsymbol{x}_j \mid \boldsymbol{\mu}_i, \boldsymbol{\Sigma}_i\right)
 \end{aligned}
-
 $$
 
 
 
 
 $$
-
 \frac{\partial\left(\sum_{l=1}^k \alpha_l-1\right)}{\partial \alpha_i}=\frac{\partial\left(\alpha_1+\alpha_2+\ldots+\alpha_i+\ldots+\alpha_k-1\right)}{\partial \alpha_i}=1
-
 $$
 
 
@@ -611,32 +547,26 @@ $$
 Multiplying both sides of Eq. (9.37) by $\alpha_{i}$ gives 
 
 $$
-
 \begin{aligned}
 \sum_{j=1}^m\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}+\lambda\alpha_{i}=0\\
 \sum_{j=1}^m\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}=-\lambda\alpha_{i}
 \end{aligned}
-
 $$
 
 Summing both sides over all mixture components gives
 
 
 $$
-
 \begin{aligned}\sum_{i=1}^k\sum_{j=1}^m\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}&=-\lambda\sum_{i=1}^k\alpha_{i}\\
 \sum_{j=1}^m\sum_{i=1}^k\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}&=-\lambda\sum_{i=1}^k\alpha_{i}
 \end{aligned}
-
 $$
 
 Since
 
 
 $$
-
 \sum_{i=1}^k\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\mathbf\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\mathbf\Sigma_{l})}=\frac{\sum_{i=1}^k\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\mathbf\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\mathbf\Sigma_{l})}=1
-
 $$
 
 
@@ -644,9 +574,7 @@ $$
 
 
 $$
-
 \sum_{j=1}^m\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}=-\lambda\alpha_{i}=m\alpha_{i}
-
 $$
 
 
@@ -654,9 +582,7 @@ $$
 
 
 $$
-
 \alpha_{i}=\cfrac{1}{m}\sum_{j=1}^m\frac{\alpha_{i}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{i},\boldsymbol\Sigma_{i})}{\sum_{l=1}^k\alpha_{l}\cdot p(\boldsymbol x_{j}|\boldsymbol\mu_{l},\boldsymbol\Sigma_{l})}
-
 $$
 
 
@@ -664,9 +590,7 @@ $$
 
 
 $$
-
 \alpha_{i}=\cfrac{1}{m}\sum_{j=1}^m\gamma_{ji}
-
 $$
 
 This is exactly Eq. (9.38).
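Eq. (9.38) simply averages the responsibilities per component; because each row of $\gamma$ sums to 1, the resulting mixture weights are automatically normalised (responsibilities below are made up):

```python
# alpha_i = (1/m) * sum_j gamma_ji
gamma = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.05, 0.95]]  # made-up gamma_ji
m, k = len(gamma), len(gamma[0])

alpha = [sum(gamma[j][i] for j in range(m)) / m for i in range(k)]
assert abs(sum(alpha) - 1.0) < 1e-12   # weights remain a valid distribution
```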

Some files were not shown because the diff is too large