Amateur pseudo-science; junk content, read at your own risk.
Neural Networks and Topology
Before getting to the CNN reading notes, let me first record the relationship between neural networks and topology, inspired by the mind-expanding article 《Neural Networks, Manifolds, and Topology》. If you stand at a high enough vantage point, you can strike directly at the essence of the problem.
With each layer, the network transforms the data, creating a new representation. We can look at the data in each of these representations and how the network classifies them. - 《Neural Networks, Manifolds, and Topology》
A tanh layer tanh(Wx + b) consists of:
- A linear transformation by the “weight” matrix W
- A translation by the vector b
- Point-wise application of tanh
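The layer described above can be sketched in a few lines of NumPy; the particular values of W and b below are arbitrary illustrations, not taken from the article:

```python
import numpy as np

# One layer: linear map by the weight matrix W, translation by the
# bias vector b, then point-wise tanh.
# W and b are arbitrary illustrative values.
W = np.array([[1.0, 0.5],
              [-0.5, 1.0]])
b = np.array([0.1, -0.2])

def tanh_layer(x):
    return np.tanh(W @ x + b)

y = tanh_layer(np.array([0.3, 0.7]))  # every output lies in (-1, 1)
```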
Each layer stretches and squishes space, but it never cuts, breaks, or folds it. Intuitively, we can see that it preserves topological properties. - 《Neural Networks, Manifolds, and Topology》
The blog author proves that such a layer (with tanh, and likewise sigmoid and softplus, but not ReLU) is a homeomorphism, provided the weight matrix W is non-singular.
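The invertibility behind that claim can be checked numerically: since tanh is a bijection onto (-1, 1), a non-singular W lets us undo the layer via x = W⁻¹(arctanh(y) − b). The matrices below are hand-picked for illustration:

```python
import numpy as np

# Numerical check that a tanh layer with invertible W is a bijection
# (continuous with a continuous inverse, hence a homeomorphism).
# W and b are arbitrary illustrative values with det(W) != 0.
W = np.array([[2.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.5, -0.5])

def forward(x):
    return np.tanh(W @ x + b)

def inverse(y):
    # arctanh undoes tanh; then undo the translation and the linear map
    return np.linalg.solve(W, np.arctanh(y) - b)

x = np.array([0.2, -0.4])
x_rec = inverse(forward(x))  # recovers x up to floating-point error
```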
But I have not yet fully understood the two-concentric-circles example later in the post.
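As a sketch of what a hidden layer can do with concentric circles, here is a hand-constructed (not trained) 2-layer network that separates an inner circle of radius 1 from an outer circle of radius 2. Six tanh hidden units act as soft half-plane detectors forming a hexagon around the inner circle; all weights are hand-picked for illustration, not from the article:

```python
import numpy as np

# Six half-planes with normals 60 degrees apart, each at distance 1.5
# from the origin; a point is "inside" only if all six units agree.
angles = np.arange(6) * np.pi / 3
W1 = 10.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (6, 2)
b1 = np.full(6, 15.0)     # unit i computes tanh(10 * (1.5 - n_i . x))
w2 = np.ones(6)
b2 = -5.0                 # fires only if all six units are near +1

def predict(x):
    """Return 0 for points near the inner circle, 1 for the outer one."""
    h = np.tanh(b1 - W1 @ x)        # ~+1 inside each half-plane
    return int(w2 @ h + b2 < 0.0)
```

The hexagon's inscribed radius is 1.5 and its circumscribed radius is about 1.73, so the radius-1 circle sits strictly inside it and the radius-2 circle strictly outside.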
- toy 2d classification with 2-layer neural network
- Why Deep Learning Needs to Be “Deep”
- Neural Networks, Manifolds, and Topology: http://colah.github.io/posts/2014-03-NN-Manifolds-Topology/
- TensorFlow Playground: http://playground.tensorflow.org/
CNN
Convolution
Convolution acts as a feature extractor: each kernel responds to a particular local pattern.
A convolutional layer generally has multiple kernels, producing one feature map per kernel.
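The feature-extraction role can be seen in a minimal sketch: sliding a small kernel over an image. The vertical-edge kernel below is a classic illustrative choice, not tied to any particular CNN:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation of a 2-D image with a small kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# An image that is dark on the left, bright on the right
image = np.zeros((5, 6))
image[:, 3:] = 1.0
edge_kernel = np.array([[-1.0, 1.0]])  # responds to left-to-right jumps
feature_map = conv2d(image, edge_kernel)
# feature_map is nonzero exactly in the column where the brightness jumps
```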
Pooling
Reduces the amount of data?
Prevents overfitting?
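A minimal sketch of 2×2 max pooling: it keeps the strongest response in each window and shrinks the feature map by 4×, which is where the data reduction comes from (and fewer downstream activations also mean fewer parameters in later layers):

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 (trailing odd row/column dropped)."""
    h, w = x.shape
    trimmed = x[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fm = np.array([[1.0, 2.0, 0.0, 1.0],
               [4.0, 3.0, 1.0, 0.0],
               [0.0, 0.0, 5.0, 6.0],
               [1.0, 2.0, 7.0, 8.0]])
pooled = max_pool_2x2(fm)
# pooled is [[4, 1], [2, 8]]
```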
Fully Connected Layer
Traditional neural networks are in fact made up entirely of fully connected layers.
What Is the Essence of a CNN?
By exploiting local structure, it reduces the number of trainable parameters and improves training results (I'm not sure about this). And nothing has more tightly correlated local information than images: an image acquires its meaning through the relationships between neighbouring pixels.
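A back-of-the-envelope calculation makes the parameter reduction concrete. The layer sizes below (a 28×28 grayscale input, 100 fully connected outputs, 32 kernels of 3×3) are chosen only for illustration:

```python
# Fully connected: every input pixel connects to every output unit.
fc_params = 28 * 28 * 100 + 100         # weights + biases = 78500

# Convolutional: 32 kernels of size 3x3x1, reused at every position.
conv_params = 32 * (3 * 3 * 1) + 32     # weights + biases = 320

# The conv layer's parameter count is independent of the image size,
# because each small kernel is shared across all spatial positions.
```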
Advantages of CNNs
- Parameter sharing
- Less training data needed
- Reduced overfitting
- Translation invariance
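One nuance worth noting: convolution by itself is translation-equivariant (shifting the input shifts the output by the same amount), and it is pooling over the result that buys a degree of invariance. A 1-D sketch with a hand-picked difference kernel:

```python
import numpy as np

def conv1d(x, k):
    """Valid 1-D cross-correlation."""
    n, m = len(x), len(k)
    return np.array([np.dot(x[i:i + m], k) for i in range(n - m + 1)])

kernel = np.array([1.0, -1.0])
x = np.zeros(8)
x[2] = 1.0                 # a spike at position 2
x_shift = np.roll(x, 1)    # the same spike, one step later

y = conv1d(x, kernel)
y_shift = conv1d(x_shift, kernel)
# y_shift is y shifted by one position (equivariance); a max over the
# whole output is identical for both (a crude "global pooling" invariance).
```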