TF day 5: Optimizing Neural Networks

Main topics:

  • Backpropagation and gradient descent
  • Setting the learning rate: exponential decay
  • Overfitting: regularization

1. Optimization algorithms

Backpropagation and gradient descent are the core of neural network training; they are used to adjust the values of the network's parameters.
For the details, see:
- Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors.
- Implementing a neural network in Python

These are the optimizers tf.train offers (as listed by PyCharm); I'll look into the differences between them when I actually need them~

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)
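
To make the update rule concrete, here is a minimal sketch of what minimize() does under the hood: use backpropagation to get the gradient of the loss with respect to each trainable variable, then move each variable a small step against its gradient. The toy variable w and loss below are made up purely for illustration.

import tensorflow as tf

# Toy problem: minimize (w - 3)^2 with plain gradient descent
w = tf.Variable(5.0)
loss = tf.square(w - 3.0)

learning_rate = 0.1
grad = tf.gradients(loss, [w])[0]                     # dloss/dw, computed by backpropagation
train_step = tf.assign_sub(w, learning_rate * grad)   # w <- w - learning_rate * dloss/dw

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):
        sess.run(train_step)
    print(sess.run(w))   # close to the minimum at 3.0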

The next two sections look at how to set learning_rate and loss.

2. Setting the learning rate

The learning rate is reduced gradually as training goes on, according to:

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

Parameters:
learning_rate: the initial learning rate
decay_rate: the decay rate
global_step: starts at 0 and is incremented by 1 every time a batch is processed

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one.
https://stackoverflow.com/questions/41166681/what-does-tensorflow-global-step-mean

decay_steps: the decay speed, i.e. how many training steps it takes for the learning rate to decay by one factor of decay_rate
staircase: when set to True, global_step / decay_steps is truncated to an integer, so the learning rate decreases in a staircase pattern
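
The difference between the two modes is easy to see by evaluating the formula directly in plain Python (a small illustration only, no TensorFlow involved):

# decayed_learning_rate = learning_rate * decay_rate ** (global_step / decay_steps)
learning_rate, decay_rate, decay_steps = 0.01, 0.96, 100

for global_step in (0, 50, 101, 201):
    continuous = learning_rate * decay_rate ** (global_step / decay_steps)
    staircase = learning_rate * decay_rate ** (global_step // decay_steps)
    print(global_step, round(continuous, 8), round(staircase, 8))
# At step 50 the continuous rate has already dropped to about 0.0098,
# while the staircase rate is still 0.01; both drop at multiples of 100.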

## Exponentially decaying learning rate
base_learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
## Staircase decay
learning_rate = tf.train.exponential_decay(base_learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
## Continuous decay (use instead of the staircase version above)
learning_rate = tf.train.exponential_decay(base_learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)

3. Overfitting

I still don't fully understand the theory behind regularization - - Roughly speaking, L2 regularization adds a penalty proportional to the sum of the squared weights to the loss, which keeps the weights small and discourages the model from fitting noise in the training data.

## Regularization loss

regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)

## Total loss
loss = cross_entropy + regularization
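
For intuition, the same penalty can be written out by hand. The sketch below assumes TF 1.x behaviour, where l2_regularizer(scale) returns a function computing scale * tf.nn.l2_loss(weights), and tf.nn.l2_loss(w) is sum(w**2) / 2:

## Hand-rolled equivalent of the l2_regularizer penalty above (assumed equivalence, for illustration)
regularization = regularizer_rate * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))
loss = cross_entropy + regularization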

The 華軟 project:

import tensorflow as tf
import pandas as pd
import numpy as np



#data
df = pd.read_excel('/home/pan-xie/PycharmProjects/ML/jxdc/打分/項目評分表last.xls')
X = df.iloc[:,2:].fillna(df.mean())
# print(X.shape)  ##(449,84)

df_label = df.iloc[:,2]
Y = []
for i in df_label:
    if i =='A':
        Y.append([1,0,0,0])
    elif i =='B':
        Y.append([0,1,0,0])
    elif i =='C':
        Y.append([0,0,1,0])
    else:
        Y.append([0,0,0,1])
# print(np.array(Y).shape)  ##(449,4)

# Configure the network hyperparameters
batch_size = 20
STEPS = 2000
regularizer_rate = 0.001

## Define the network parameters (weights and biases)
w1 = tf.Variable(tf.random_normal([84,150]))
w2 = tf.Variable(tf.random_normal([150,4]))
biases1 = tf.Variable(tf.zeros(shape=[150]))
biases2 = tf.Variable(tf.zeros(shape=[4]))

## Input layer
x = tf.placeholder(tf.float32,shape=[None,84],name='x-input')
y_ = tf.placeholder(tf.float32,shape=[None,4],name='y-input') ## ground-truth labels

## Forward propagation
a = tf.nn.tanh(tf.matmul(x,w1)+biases1)
y = tf.nn.tanh(tf.matmul(a,w2)+biases2)  ## predicted logits

## Cross-entropy loss
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y,labels=y_))

## Regularization loss
regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)

## Total loss
loss = cross_entropy + regularization

## Exponentially decaying learning rate
learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
# learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)

## Optimize the loss with gradient descent
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

## Accuracy
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))  ## argmax returns the index of the largest element
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))


data_size = len(df)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # print(sess.run(w1))
    # print(sess.run(w2))
    for i in range(STEPS):
        start = (i * batch_size) % data_size
        end = min(start + batch_size, data_size)

        ## Train the network on the selected batch and update the parameters
        sess.run(train_step, feed_dict={x: X[start:end+1], y_: Y[start:end+1]})

        if i % 100 == 0:
            accuracy_,loss_ = sess.run([accuracy,cross_entropy],feed_dict={x: X[start:end+1], y_: Y[start:end+1]})
            learning_rate_,global_step_ = sess.run([learning_rate,global_step])
            print("after %d training steps,"" accuracy is %g,""loss is %g"%(i,accuracy_,loss_))
            print("learning rate is %g,""global_step is %g"%(learning_rate_,global_step_))
    # print(sess.run(w1))
    # print(sess.run(w2))

Output:

after 0 training steps, accuracy is 0.428571,loss is 1.24586
after 100 training steps, accuracy is 0.952381,loss is 0.437043
after 200 training steps, accuracy is 1,loss is 0.341079
after 300 training steps, accuracy is 1,loss is 0.343718
after 400 training steps, accuracy is 1,loss is 0.340793
after 500 training steps, accuracy is 1,loss is 0.34124
after 600 training steps, accuracy is 1,loss is 0.341603
after 700 training steps, accuracy is 1,loss is 0.34144
after 800 training steps, accuracy is 1,loss is 0.34089
after 900 training steps, accuracy is 1,loss is 0.342235
after 1000 training steps, accuracy is 1,loss is 0.341608
after 1100 training steps, accuracy is 1,loss is 0.340753
after 1200 training steps, accuracy is 1,loss is 0.343047
after 1300 training steps, accuracy is 1,loss is 0.340775
after 1400 training steps, accuracy is 1,loss is 0.341164
after 1500 training steps, accuracy is 1,loss is 0.340755
after 1600 training steps, accuracy is 1,loss is 0.340856
after 1700 training steps, accuracy is 1,loss is 0.341183
after 1800 training steps, accuracy is 1,loss is 0.340931
after 1900 training steps, accuracy is 1,loss is 0.340826
learning rate is 0.00999592,global_step is 1
learning rate is 0.00959608,global_step is 101
learning rate is 0.00921224,global_step is 201
learning rate is 0.00884375,global_step is 301
learning rate is 0.00849,global_step is 401
learning rate is 0.0081504,global_step is 501
learning rate is 0.00782438,global_step is 601
learning rate is 0.00751141,global_step is 701
learning rate is 0.00721095,global_step is 801
learning rate is 0.00692251,global_step is 901
learning rate is 0.00664561,global_step is 1001
learning rate is 0.00637979,global_step is 1101
learning rate is 0.0061246,global_step is 1201
learning rate is 0.00587961,global_step is 1301
learning rate is 0.00564443,global_step is 1401
learning rate is 0.00541865,global_step is 1501
learning rate is 0.0052019,global_step is 1601
learning rate is 0.00499383,global_step is 1701
learning rate is 0.00479407,global_step is 1801
learning rate is 0.00460231,global_step is 1901
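
As a quick sanity check against the decay formula: at global_step = 101 the continuous decay gives 0.01 * 0.96^(101/100) ≈ 0.00959608, which matches the printed learning rate above.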