The softmax Function Explained and Its Gradient Derivation for Error Backpropagation

Abstract

This article gives the definition of the softmax function and derives its gradient for backpropagation.

Related

For the companion code, see the article:

Python 和 PyTorch 對比實現 softmax 及其反向傳播

Series index:
https://blog.csdn.net/oBrightLamp/article/details/85067981

Main text

1. Definition

The softmax function is commonly used in the output layer of multi-class classification problems.
It is defined as follows:
$$
s_{i} = \frac{e^{x_{i}}}{\sum_{t=1}^{k} e^{x_{t}}}, \qquad
\sum_{t=1}^{k} e^{x_{t}} = e^{x_{1}} + e^{x_{2}} + e^{x_{3}} + \cdots + e^{x_{k}}, \qquad
i = 1, 2, 3, \cdots, k
$$
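As a point of reference, a direct NumPy translation of this definition might look like the following minimal sketch (the function name softmax_naive and the sample values are illustrative, not taken from the companion article):

```python
import numpy as np

def softmax_naive(x):
    # Direct translation of the definition: s_i = exp(x_i) / sum_t exp(x_t)
    ex = np.exp(x)
    return ex / np.sum(ex)

# The outputs are positive and sum to 1.
x = np.array([1.0, 2.0, 3.0])
print(softmax_naive(x))        # [0.09003057 0.24472847 0.66524096]
print(softmax_naive(x).sum())  # 1.0
```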

When implementing the softmax computation, the exponential $e^{x_i}$ can become very large and cause overflow.
A common remedy is to multiply both the numerator and the denominator of the fraction by a constant $C$, which transforms it into:

$$
s_{i} = \frac{C e^{x_{i}}}{C \sum_{t=1}^{k} e^{x_{t}}}
      = \frac{e^{x_{i} + \log C}}{\sum_{t=1}^{k} e^{x_{t} + \log C}}
      = \frac{e^{x_{i} - m}}{\sum_{t=1}^{k} e^{x_{t} - m}}, \qquad
m = -\log C = \max(x_{i})
$$

The value of $C$ can be chosen freely without affecting the result. Here $m$ is taken to be the maximum of the $x_i$, which shifts the largest value of the data to 0.
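A minimal sketch of the stabilized variant described above, subtracting $m = \max(x_i)$ before exponentiating (again, names and values are only illustrative):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the maximum so the largest exponent is exp(0) = 1.
    m = np.max(x)
    ex = np.exp(x - m)
    return ex / np.sum(ex)

# Large inputs would overflow the naive form, but not the shifted one.
x = np.array([1000.0, 1001.0, 1002.0])
print(softmax(x))  # [0.09003057 0.24472847 0.66524096], same result as for [1, 2, 3]
```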

2. Gradient derivation

Consider a softmax transformation:
$$
x = (x_1, x_2, x_3, \cdots, x_k), \qquad s = \mathrm{softmax}(x)
$$
Derive the derivative of $s_1$ with respect to $x_1$:
$$
s_{1} = \frac{e^{x_{1}}}{\sum_{t=1}^{k} e^{x_{t}}} = \frac{e^{x_{1}}}{sum}, \qquad
sum = \sum_{t=1}^{k} e^{x_{t}} = e^{x_{1}} + \sum_{t=2}^{k} e^{x_{t}}, \qquad
\frac{\partial\, sum}{\partial x_{1}} = \frac{\partial \sum_{t=1}^{k} e^{x_{t}}}{\partial x_{1}} = e^{x_{1}}
$$

$$
\frac{\partial s_{1}}{\partial x_{1}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot \frac{\partial\, sum}{\partial x_{1}}}{sum^{2}}
= \frac{e^{x_{1}} \cdot sum - e^{x_{1}} \cdot e^{x_{1}}}{sum^{2}}
= s_{1} - s_{1}^{2}
$$

Since $x_2$ appears in the denominator, it also affects the gradient of $s_1$. Derive the derivative of $s_1$ with respect to $x_2$:

$$
\frac{\partial s_{1}}{\partial x_{2}}
= \frac{0 \cdot sum - e^{x_{1}} \cdot \frac{\partial\, sum}{\partial x_{2}}}{sum^{2}}
= \frac{-e^{x_{1}} \cdot e^{x_{2}}}{sum^{2}}
= -s_{1} s_{2}
$$

By the same reasoning:

$$
\frac{\partial s_{i}}{\partial x_{j}} =
\left\{
\begin{array}{rr}
-s_{i}^{2} + s_{i}, & i = j \\
-s_{i} s_{j}, & i \neq j
\end{array}
\right.
$$

Expanding this gives the gradient matrix of softmax:

$$
\nabla s_{(x)} =
\begin{pmatrix}
\partial s_{1}/\partial x_{1} & \partial s_{1}/\partial x_{2} & \cdots & \partial s_{1}/\partial x_{k} \\
\partial s_{2}/\partial x_{1} & \partial s_{2}/\partial x_{2} & \cdots & \partial s_{2}/\partial x_{k} \\
\vdots & \vdots & \ddots & \vdots \\
\partial s_{k}/\partial x_{1} & \partial s_{k}/\partial x_{2} & \cdots & \partial s_{k}/\partial x_{k}
\end{pmatrix}
=
\begin{pmatrix}
-s_{1}s_{1} + s_{1} & -s_{1}s_{2} & \cdots & -s_{1}s_{k} \\
-s_{2}s_{1} & -s_{2}s_{2} + s_{2} & \cdots & -s_{2}s_{k} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{k}s_{1} & -s_{k}s_{2} & \cdots & -s_{k}s_{k} + s_{k}
\end{pmatrix}
$$

This is a Jacobian matrix.
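Because every entry of this Jacobian depends only on the softmax outputs, it can be written compactly as $\mathrm{diag}(s) - s s^{T}$. A small NumPy sketch of this construction, with a finite-difference check added purely for illustration:

```python
import numpy as np

def softmax(x):
    ex = np.exp(x - np.max(x))
    return ex / np.sum(ex)

def softmax_jacobian(s):
    # ds_i/dx_j = s_i * (delta_ij - s_j), i.e. diag(s) - outer(s, s)
    return np.diag(s) - np.outer(s, s)

x = np.array([0.1, 0.5, -0.3, 2.0])
s = softmax(x)
J = softmax_jacobian(s)

# Numerical check of each column with central differences.
eps = 1e-6
J_num = np.zeros_like(J)
for j in range(len(x)):
    d = np.zeros(len(x))
    d[j] = eps
    J_num[:, j] = (softmax(x + d) - softmax(x - d)) / (2 * eps)
print(np.allclose(J, J_num, atol=1e-6))  # True
```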

3. Backpropagation

Consider an input vector $x$ that is normalized by the softmax function into a vector $s$, which is then propagated through a forward operation to produce an error value (a scalar $e$). We want the gradient of $e$ with respect to $x$.
$$
x = (x_1, x_2, x_3, \cdots, x_k), \qquad s = \mathrm{softmax}(x), \qquad e = \mathrm{forward}(s)
$$
Derivation:
$$
\nabla e_{(s)} = \left(\frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k}\right)
$$

$$
\frac{\partial e}{\partial x_i}
= \frac{\partial e}{\partial s_1}\frac{\partial s_1}{\partial x_i}
+ \frac{\partial e}{\partial s_2}\frac{\partial s_2}{\partial x_i}
+ \frac{\partial e}{\partial s_3}\frac{\partial s_3}{\partial x_i}
+ \cdots
+ \frac{\partial e}{\partial s_k}\frac{\partial s_k}{\partial x_i}
$$
Expanding $\partial e/\partial x_i$ gives the gradient vector of $e$ with respect to $x$:
$$
\nabla e_{(x)} = \left(\frac{\partial e}{\partial s_1}, \frac{\partial e}{\partial s_2}, \frac{\partial e}{\partial s_3}, \cdots, \frac{\partial e}{\partial s_k}\right)
\begin{pmatrix}
\partial s_{1}/\partial x_{1} & \partial s_{1}/\partial x_{2} & \cdots & \partial s_{1}/\partial x_{k} \\
\partial s_{2}/\partial x_{1} & \partial s_{2}/\partial x_{2} & \cdots & \partial s_{2}/\partial x_{k} \\
\vdots & \vdots & \ddots & \vdots \\
\partial s_{k}/\partial x_{1} & \partial s_{k}/\partial x_{2} & \cdots & \partial s_{k}/\partial x_{k}
\end{pmatrix}
= \nabla e_{(s)}
\begin{pmatrix}
-s_{1}s_{1} + s_{1} & -s_{1}s_{2} & \cdots & -s_{1}s_{k} \\
-s_{2}s_{1} & -s_{2}s_{2} + s_{2} & \cdots & -s_{2}s_{k} \\
\vdots & \vdots & \ddots & \vdots \\
-s_{k}s_{1} & -s_{k}s_{2} & \cdots & -s_{k}s_{k} + s_{k}
\end{pmatrix}
$$
All of the $\partial e/\partial s_i$ values are known; they are the error gradients passed back from the upstream forward operation during backpropagation, so $\nabla e_{(s)}$ is also known.
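A sketch of this backward step in NumPy (the variable name grad_s for $\nabla e_{(s)}$ is my own). The first function multiplies by the explicit Jacobian; the second uses the algebraically equivalent form $\partial e/\partial x_j = s_j\,(\partial e/\partial s_j - \sum_i \partial e/\partial s_i \cdot s_i)$, which avoids building the full $k \times k$ matrix. The optional PyTorch lines only cross-check the result with autograd:

```python
import numpy as np

def softmax(x):
    ex = np.exp(x - np.max(x))
    return ex / np.sum(ex)

def softmax_backward(grad_s, s):
    # de/dx = de/ds (row vector) times the Jacobian ds/dx.
    J = np.diag(s) - np.outer(s, s)
    return grad_s @ J

def softmax_backward_fast(grad_s, s):
    # Equivalent form: de/dx_j = s_j * (de/ds_j - sum_i de/ds_i * s_i).
    return s * (grad_s - np.dot(grad_s, s))

x = np.array([0.2, -1.0, 3.0, 0.5])
s = softmax(x)
grad_s = np.array([0.1, 0.4, -0.2, 0.3])   # stands in for the gradient returned by forward()
print(np.allclose(softmax_backward(grad_s, s),
                  softmax_backward_fast(grad_s, s)))   # True

# Optional cross-check against PyTorch autograd (cf. the companion article).
import torch
xt = torch.tensor(x, requires_grad=True)
torch.softmax(xt, dim=0).backward(torch.tensor(grad_s))
print(np.allclose(xt.grad.numpy(), softmax_backward_fast(grad_s, s)))  # True
```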

4. Interesting properties

4.1 Relative error

Continuing with the example above, observe that in the softmax gradient matrix the elements of any single column sum to zero:

$$
\sum_{t=1}^{k} \frac{\partial s_{t}}{\partial x_{i}} = 0
$$
If every element of the gradient vector of $e$ with respect to $s$ is identically equal to some real number $a$:

$$
\frac{\partial e}{\partial s_{i}} \equiv a
$$

then:

$$
\nabla e_{(x)} \equiv 0
$$

That is, when the upstream gradient is uniform, no error gradient is propagated.
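A quick numeric illustration of this property (all names and values are just for demonstration): each column of the Jacobian sums to zero, so a constant upstream gradient produces a zero gradient with respect to $x$:

```python
import numpy as np

def softmax(x):
    ex = np.exp(x - np.max(x))
    return ex / np.sum(ex)

x = np.array([0.3, -1.2, 2.0, 0.7])
s = softmax(x)
J = np.diag(s) - np.outer(s, s)

print(np.allclose(J.sum(axis=0), 0.0))   # every column of the Jacobian sums to 0
grad_s = np.full_like(s, 0.7)            # uniform upstream gradient, a = 0.7
print(np.allclose(grad_s @ J, 0.0))      # the propagated gradient vanishes
```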

4.2 Convergence property

Suppose the error is the cross-entropy loss:

$$
e = \mathrm{forward}(s) = -\sum_{i=1}^{k} y_{i} \log(s_{i}), \qquad
\nabla e_{(s)} = \left(-\frac{y_1}{s_1}, -\frac{y_2}{s_2}, \cdots, -\frac{y_k}{s_k}\right)
$$

By the property above, the gradient stops propagating once every component of $\nabla e_{(s)}$ takes the same value, i.e. when:

$$
\frac{y_i}{s_i} \equiv a
$$

which is equivalent to:

$$
\frac{s_i}{s_j} = \frac{y_i}{y_j}
$$

That is, the probability distribution $s_i$ converges to the distribution proportional to $y_i$.
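The same can be checked numerically. In the sketch below (values are illustrative, and $y$ is not required to be normalized), the input is chosen so that $s$ is already proportional to $y$; the upstream gradient is then uniform and, by the property of section 4.1, no gradient flows back:

```python
import numpy as np

def softmax(x):
    ex = np.exp(x - np.max(x))
    return ex / np.sum(ex)

y = np.array([1.0, 3.0, 6.0])    # target weights, not necessarily normalized
x = np.log(y) + 5.0              # chosen so that softmax(x) is proportional to y
s = softmax(x)

grad_s = -y / s                            # de/ds_i for e = -sum_i y_i * log(s_i)
J = np.diag(s) - np.outer(s, s)
print(np.allclose(s, y / y.sum()))         # s equals the proportional distribution of y
print(np.allclose(grad_s @ J, 0.0))        # and no error gradient is propagated
```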

End of article.
