I originally wanted to hand-write gradient descent from the update formula derived in my machine-learning textbook, but it performed poorly; after switching to TensorFlow's built-in gradient-descent optimizer it worked. I also found that squared error as the cost function does not give good results on MNIST: training accuracy kept hovering around 0.1. On MNIST the loss function has to be cross-entropy. There are several rules of thumb for the number of hidden nodes, and I picked a value in that neighborhood. The MNIST code is given first.
import numpy
import input_data
import tensorflow as tf
mnist = input_data.read_data_sets("MNIST_data/",one_hot=True)
input_size = 784  # 28x28 images, flattened
out_size = 10     # 10 digit classes
hide_note = 60    # hidden-layer size, picked near a rule-of-thumb estimate
x = tf.placeholder("float", [None, input_size])  # input images
y = tf.placeholder("float", [None, out_size])    # one-hot labels
# weights start as small random values, biases as 0.1 -- not all zeros
v = tf.Variable(tf.random_normal([input_size, hide_note], stddev=0.1))
b = tf.Variable(tf.zeros([hide_note]) + 0.1)
w = tf.Variable(tf.random_normal([hide_note, out_size], stddev=0.1))
sita = tf.Variable(tf.zeros([out_size]) + 0.1)
a = tf.matmul(x, v) + b
a = tf.nn.relu(a)        # hidden-layer activation
y_ = tf.matmul(a, w) + sita
y_ = tf.nn.relu(y_)      # output layer, also passed through ReLU
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=y_))
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(5000):
    batch_xs, batch_ys = mnist.train.next_batch(100)  # mini-batch of 100
    _, new_loss = sess.run([opt, loss], feed_dict={x: batch_xs, y: batch_ys})
accuracy = 0
for i in range(1000):
    batch_xs, batch_ys = mnist.test.next_batch(1)
    indexs = sess.run(y_, feed_dict={x: batch_xs})
    answer = numpy.argmax(indexs)
    # print(indexs)
    print("test:", i, "predicted class:", answer, "actual class:", numpy.argmax(batch_ys))
    if answer == numpy.argmax(batch_ys):
        accuracy = accuracy + 1
print(accuracy / 1000)
A few notes: do not initialize all the weights to zero, and map each node's output through an activation function. The learning rate is 0.05; a learning rate no larger than 0.1 is generally recommended. While debugging I learned some details and pitfalls of TensorFlow's matrix-operation functions. One point worth mentioning: in TensorFlow a vector (1-D array) cannot be multiplied directly with a matrix; you can either reshape it into a matrix first, or use an elementwise multiplication followed by a sum.
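That vector-versus-matrix point can be made concrete. The snippet below is a minimal sketch using NumPy arrays (the shape mechanics mirror tf.reshape, tf.multiply, and tf.reduce_sum in TensorFlow, where tf.matmul requires both arguments to have rank at least 2; the numbers are made up for illustration):

```python
import numpy

v = numpy.array([1.0, 2.0, 3.0])      # a 1-D vector, shape (3,)
M = numpy.array([[1.0, 0.0],
                 [0.0, 1.0],
                 [1.0, 1.0]])         # a matrix, shape (3, 2)

# Workaround 1: reshape the vector into a 1x3 matrix, then matrix-multiply.
row = v.reshape(1, 3)                 # shape (1, 3)
result1 = row @ M                     # shape (1, 2)

# Workaround 2: elementwise multiply against each column, then sum over
# the vector axis (tf.multiply + tf.reduce_sum in TensorFlow).
result2 = (v[:, None] * M).sum(axis=0)  # shape (2,)

print(result1)  # [[4. 5.]]
print(result2)  # [4. 5.]
```

Both give the same dot products of the vector against each matrix column; the reshape route keeps everything in matrix form, which is what tf.matmul expects.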
Machine Learning exercise 5.5, with squared error as the cost function. Note the conversion of discrete attribute values: here each one is converted into a vector, i.e. a discrete attribute with three possible values becomes a three-dimensional unit vector with a 1 in the dimension of its value and 0 everywhere else. Since the watermelon dataset has only a few examples, I did the conversion by hand; the last column is the label.
0.697 0.460 1 0 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.774 0.376 0 1 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1
0.634 0.264 0 1 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.608 0.318 1 0 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1
0.556 0.215 0 0 1 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1
0.403 0.237 1 0 0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 1
0.481 0.149 0 1 0 0 0 1 1 0 0 0 1 0 0 1 0 0 1 1
0.437 0.211 0 1 0 0 0 1 1 0 0 1 0 0 0 1 0 1 0 1
0.666 0.091 0 1 0 0 0 1 0 1 0 0 1 0 0 1 0 1 0 0
0.243 0.267 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 0 1 0
0.245 0.057 0 0 1 0 1 0 0 0 1 0 0 1 0 0 1 1 0 0
0.343 0.099 0 0 1 1 0 0 1 0 0 0 0 1 0 0 1 0 1 0
0.639 0.161 1 0 0 0 0 1 1 0 0 0 1 0 1 0 0 1 0 0
0.657 0.198 0 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 0
0.360 0.370 0 1 0 0 0 1 1 0 0 1 0 0 0 1 0 0 1 0
0.593 0.042 0 0 1 1 0 0 1 0 0 0 0 1 0 0 1 1 0 0
0.719 0.103 1 0 0 1 0 0 0 1 0 0 1 0 0 1 0 1 0 0
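On a larger dataset the hand conversion above could be scripted. A minimal sketch of one-hot encoding a single attribute value (the attribute value list here is a hypothetical placeholder, not the actual watermelon encoding):

```python
def one_hot(value, values):
    """Encode `value` as a unit vector over the ordered list `values`:
    1 in the dimension of the value, 0 everywhere else."""
    vec = [0] * len(values)
    vec[values.index(value)] = 1
    return vec

# e.g. a discrete attribute with three possible values:
colors = ["green", "black", "white"]  # hypothetical value ordering
print(one_hot("black", colors))       # [0, 1, 0]
```

Concatenating the one-hot vectors of each discrete attribute with the continuous features yields rows like the ones in the table above.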
import numpy
import tensorflow as tf
test_data = numpy.loadtxt("test_data.txt")  # the hand-encoded watermelon table above
train_data = test_data[:, 0:19]   # 19 feature columns
train_label = test_data[:, 19:]   # last column is the label
input_size = 19
out_size = 1
hide_note = 8   # hidden-layer size
x = tf.placeholder("float", [None, input_size])
y = tf.placeholder("float", [None, out_size])
v = tf.Variable(tf.random_normal([input_size, hide_note], stddev=0.1))
b = tf.Variable(tf.zeros([hide_note])+0.1)
w = tf.Variable(tf.random_normal([hide_note, out_size], stddev=0.1))
sita = tf.Variable(tf.zeros([out_size])+0.1)
a = tf.matmul(x,v)+b
a = tf.nn.relu(a)
y_ = tf.matmul(a, w)+sita
y_ = tf.nn.relu(y_)
loss = 0.5 * tf.reduce_sum((y_ - y) * (y_ - y), [0, 1])  # squared-error cost
opt = tf.train.GradientDescentOptimizer(0.05).minimize(loss)
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
for i in range(1000):  # 1000 iterations over the full training set
    _, new_loss = sess.run([opt, loss], feed_dict={x: train_data, y: train_label})
indexs = sess.run(y_, feed_dict={x: train_data})
print("predicted:", indexs, "actual:", train_label)
You can check the final output against the labels yourself; accuracy on the training set reaches 100%. This program was adapted from the MNIST one above, and I learned a lot while adapting it. Next I will look into where my hand-written gradient descent went wrong, and then implement gradient descent by hand on the watermelon dataset.
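As a starting point for that investigation, here is a minimal NumPy sketch of the textbook backpropagation updates for a single-hidden-layer network with sigmoid activations and squared-error loss. Variable names follow the code above (v, b, w, sita), but this is an illustrative sketch under those assumptions, not my original hand-written version:

```python
import numpy

def sigmoid(z):
    return 1.0 / (1.0 + numpy.exp(-z))

def sq_loss(x, y, v, b, w, sita):
    """Squared-error cost 0.5 * sum((y_ - y)^2) for the current parameters."""
    a = sigmoid(x @ v + b)
    y_ = sigmoid(a @ w + sita)
    return 0.5 * ((y_ - y) ** 2).sum()

def gd_step(x, y, v, b, w, sita, lr=0.05):
    """One full-batch gradient-descent step; returns the updated parameters."""
    # forward pass
    a = sigmoid(x @ v + b)            # hidden activations, shape (n, hide)
    y_ = sigmoid(a @ w + sita)        # outputs, shape (n, out)
    # output-layer error term: dE/dz2 = (y_ - y) * y_ * (1 - y_)
    g = (y_ - y) * y_ * (1.0 - y_)
    # hidden-layer error term, backpropagated through w
    e = (g @ w.T) * a * (1.0 - a)
    # gradient-descent updates
    w = w - lr * (a.T @ g)
    sita = sita - lr * g.sum(axis=0)
    v = v - lr * (x.T @ e)
    b = b - lr * e.sum(axis=0)
    return v, b, w, sita

# tiny smoke run on made-up data, just to exercise the updates
rng = numpy.random.default_rng(0)
x = rng.random((4, 3))
y = rng.integers(0, 2, (4, 1)).astype(float)
v = rng.normal(0.0, 0.1, (3, 5)); b = numpy.zeros(5)
w = rng.normal(0.0, 0.1, (5, 1)); sita = numpy.zeros(1)
before = sq_loss(x, y, v, b, w, sita)
for _ in range(200):
    v, b, w, sita = gd_step(x, y, v, b, w, sita, lr=0.5)
after = sq_loss(x, y, v, b, w, sita)
print(before, after)  # the loss should shrink over the 200 steps
```

Comparing a hand-written loop like this, term by term, against the gradients TensorFlow computes should make it clear where my earlier attempt diverged.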