[Julia Deep Learning CV] Part 1: MNIST

To do good work, one must first sharpen one's tools.

For now, let's be honest package-callers and lean on the libraries.

1. Install the required packages:

        Pkg.add("Statistics")

        Pkg.add("Flux")

2. Now the code:

using Statistics
using Flux  # brings Chain, Dense, ADAM, params, … into scope
using Flux: onehotbatch, onecold, crossentropy, throttle
using Base.Iterators: repeated
imgs = Flux.Data.MNIST.images()
labels = Flux.Data.MNIST.labels();

 # The code above loads the packages and reads imgs and labels

imgs[27454]   # in a notebook this renders training image 27454, a handwritten "7"

labels[27454]
# 7
X = hcat(float.(reshape.(imgs, :))...)  # each MNIST image is 28 × 28; flatten each to a 784-element column and stack them
Y = onehotbatch(labels, 0:9)            # one-hot encode the labels
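
A quick shape check (a minimal sketch; the sizes assume the standard 60,000-image MNIST training set):

size(X)                      # (784, 60000): one flattened image per column
size(Y)                      # (10, 60000): one one-hot column per label
onehotbatch([0, 7, 9], 0:9)  # 10×3 OneHotMatrix; column 2 has its single 1 in row 8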

model = Chain(Dense(28^2, 32, relu), Dense(32, 10), softmax)  # three layers: 28^2 -> 32 with relu, 32 -> 10, then softmax
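
A quick sanity check on the freshly built model (a sketch; the weights are still random, so the probabilities will be near-uniform):

y0 = model(X[:, 1])  # 10-element vector of class probabilities for the first image
sum(y0)              # ≈ 1.0, because softmax normalises the output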

# The optimisation setup follows

loss(x, y) = crossentropy(model(x), y)  # cross-entropy between predictions and one-hot targets
opt = ADAM();

accuracy(x, y) = mean(onecold(model(x)) .== onecold(y))  # fraction of samples whose predicted class matches the label
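
onecold is the inverse of onehotbatch: it returns the index of the largest entry, so comparing predictions and targets index-by-index gives the hit rate. A small sketch:

onecold(Y[:, 1])              # 1-based index of the true class of the first image
onecold(model(X[:, 1]), 0:9)  # the predicted digit itself, mapped back through 0:9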

dataset = repeated((X, Y), 200)   # the full batch repeated 200 times, i.e. 200 training iterations
evalcb = () -> @show(loss(X, Y))  # callback that prints the current loss

# Time to train

Flux.train!(loss, params(model), dataset, opt, cb = throttle(evalcb, 10));  # throttle limits the callback to once every 10 seconds
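
For the curious, train! is roughly the following hand-written loop (a conceptual sketch, not Flux's actual source; the gradient/update! calls shown here are the Zygote-style API of Flux 0.10+, whereas the "(tracked)" values below come from the older Tracker backend):

ps = params(model)
for (x, y) in dataset
    gs = Flux.gradient(() -> loss(x, y), ps)  # gradients of the loss w.r.t. all parameters
    Flux.Optimise.update!(opt, ps, gs)        # one ADAM step
    evalcb()                                  # train! throttles this callback for us
end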

# Training output:

loss(X, Y) = 2.3259583f0 (tracked)
loss(X, Y) = 1.6830894f0 (tracked)
loss(X, Y) = 1.1227762f0 (tracked)
loss(X, Y) = 0.7927527f0 (tracked)
loss(X, Y) = 0.6152953f0 (tracked)
loss(X, Y) = 0.51356655f0 (tracked)
loss(X, Y) = 0.44959342f0 (tracked)
loss(X, Y) = 0.4059622f0 (tracked)
loss(X, Y) = 0.3741082f0 (tracked)
loss(X, Y) = 0.3512681f0 (tracked)
loss(X, Y) = 0.33128205f0 (tracked)
loss(X, Y) = 0.31474704f0 (tracked)
loss(X, Y) = 0.3016968f0 (tracked)
loss(X, Y) = 0.28936785f0 (tracked)
loss(X, Y) = 0.27849576f0 (tracked)
loss(X, Y) = 0.2688136f0 (tracked)
......
accuracy(X, Y) # accuracy on the training set
#0.92735

test_X = hcat(float.(reshape.(Flux.Data.MNIST.images(:test), :))...)
test_Y = onehotbatch(Flux.Data.MNIST.labels(:test), 0:9);
model(test_X[:, 5287])   # class-probability vector for a single test image
accuracy(test_X, test_Y) # accuracy on the test set
#0.924

The code is straightforward: the model is a small MLP, the loss is cross-entropy, and the optimiser is ADAM.
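
For reference, with one-hot targets cross-entropy is just the average negative log-probability the model assigns to the true class; a minimal hand-rolled version (assuming the model output is already a softmax probability vector):

manual_ce(ŷ, y) = -sum(y .* log.(ŷ)) / size(y, 2)  # mean over the batch (columns)
manual_ce(model(X), Y) ≈ loss(X, Y)                # should agree with Flux's crossentropy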

In Part 2 we will extend these same pieces; we can't stay package-callers forever, can we?
