TF day 5: Optimizing Neural Networks

Main topics:

  • Backpropagation and gradient descent
  • Setting the learning rate: exponential decay
  • Overfitting: regularization

1. Optimization Algorithms

Backpropagation and gradient descent are the core of neural network training; they are what adjust the values of the network's parameters. A minimal one-variable sketch follows the references below. For the details, see:
- Learning representations by back-propagating errors. Rumelhart D E, Hinton G E, Williams R J
- Neural network implementation in Python
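
A minimal sketch of what a single gradient-descent update looks like (plain Python; the quadratic toy loss and learning rate here are my own illustrative choices, not taken from the references):

# Toy example: minimize loss(w) = (w - 3)^2 by gradient descent.
w = 0.0
learning_rate = 0.1
for step in range(100):
    grad = 2 * (w - 3)            # derivative of the loss w.r.t. w (what backpropagation computes for real networks)
    w = w - learning_rate * grad  # gradient-descent update rule
print(w)  # converges towards 3, the minimizer of the loss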

TensorFlow provides a number of built-in optimizers; I'll look into how they differ once I actually need them ~
[Figure: the optimizer classes provided under tf.train]

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

The next two sections cover how to set learning_rate and loss ~

2. Setting the Learning Rate

Gradually reduce the learning rate as the iterations proceed. The underlying formula is:

decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)

Parameters:
learning_rate: the initial learning rate
decay_rate: the decay rate
global_step: starts at 0 and increases by 1 for every batch

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss. global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one.
https://stackoverflow.com/questions/41166681/what-does-tensorflow-global-step-mean

decay_steps: the decay speed, i.e. how many steps it takes to apply one factor of decay_rate
staircase: when set to True, global_step / decay_steps is truncated to an integer, so the learning rate decays in a staircase pattern

## exponential learning-rate decay
learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
## staircase decay (option 1)
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
## continuous decay (option 2; use one or the other, not both)
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)
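
To see the difference between the two modes, here is a small hand evaluation of the decay formula (plain Python; the step values are chosen only for illustration):

# decayed = learning_rate * decay_rate ** (global_step / decay_steps)
learning_rate, decay_rate, decay_steps = 0.01, 0.96, 100
for gs in (0, 50, 100, 150, 200):
    continuous = learning_rate * decay_rate ** (gs / decay_steps)   # smooth decay every step
    staircase = learning_rate * decay_rate ** (gs // decay_steps)   # integer division: drops only at multiples of decay_steps
    print(gs, round(continuous, 6), round(staircase, 6))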

3. Overfitting

I still don't fully understand the theory behind regularization - - The rough idea is to add a penalty on the magnitude of the weights to the loss, so the optimizer is discouraged from fitting noise in the training data with very large weights.

## compute the regularization loss

regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)

## total loss
loss = cross_entropy + regularization
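
As far as I understand it, tf.contrib.layers.l2_regularizer(scale)(w) evaluates to scale * sum(w ** 2) / 2 (i.e. scale * tf.nn.l2_loss(w)), so the total loss above could equivalently be written as the sketch below (my own reconstruction, reusing the w1, w2, regularizer_rate and cross_entropy names from the project code further down):

## manual L2 penalty, assumed equivalent to the l2_regularizer version above
l2_penalty = regularizer_rate * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))  # tf.nn.l2_loss(w) = sum(w**2) / 2
loss = cross_entropy + l2_penalty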

The 华软 project:

import tensorflow as tf
import pandas as pd
import numpy as np



#data
df = pd.read_excel('/home/pan-xie/PycharmProjects/ML/jxdc/打分/项目评分表last.xls')
X = df.iloc[:,2:].fillna(df.mean())
# print(X.shape)  ##(449,84)

df_label = df.iloc[:,2]
Y = []
for i in df_label:
    if i =='A':
        Y.append([1,0,0,0])
    elif i =='B':
        Y.append([0,1,0,0])
    elif i =='C':
        Y.append([0,0,1,0])
    else:
        Y.append([0,0,0,1])
# print(np.array(Y).shape)  ##(449,4)

# configure the network hyperparameters
batch_size = 20
STEPS = 2000
regularizer_rate = 0.001

## define the network parameters (weights and biases)
w1 = tf.Variable(tf.random_normal([84,150]))
w2 = tf.Variable(tf.random_normal([150,4]))
biases1 = tf.Variable(tf.zeros(shape=[150]))
biases2 = tf.Variable(tf.zeros(shape=[4]))

## input layer
x = tf.placeholder(tf.float32,shape=[None,84],name='x-input')
y_ = tf.placeholder(tf.float32,shape=[None,4],name='y-input') ## ground-truth labels

## forward propagation
a = tf.nn.tanh(tf.matmul(x,w1)+biases1)
y = tf.nn.tanh(tf.matmul(a,w2)+biases2)  ## predictions (logits)

### loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y,labels=y_))

## compute the regularization loss
regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)

## total loss
loss = cross_entropy + regularization

## exponential learning-rate decay
learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
# learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)

## optimize the loss with gradient descent
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

## accuracy
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))  ### argmax returns the index of the largest value
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))


data_size = len(df)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # print(sess.run(w1))
    # print(sess.run(w2))
    for i in range(STEPS):
        start = (i * batch_size) % data_size
        end = min(start + batch_size, data_size)

        ## train the network on the selected batch and update the parameters
        sess.run(train_step, feed_dict={x: X[start:end+1], y_: Y[start:end+1]})

        if i % 100 == 0:
            accuracy_,loss_ = sess.run([accuracy,cross_entropy],feed_dict={x: X[start:end+1], y_: Y[start:end+1]})
            learning_rate_,global_step_ = sess.run([learning_rate,global_step])
            print("after %d training steps,"" accuracy is %g,""loss is %g"%(i,accuracy_,loss_))
            print("learning rate is %g,""global_step is %g"%(learning_rate_,global_step_))
    # print(sess.run(w1))
    # print(sess.run(w2))

Output:

after 0 training steps, accuracy is 0.428571,loss is 1.24586
after 100 training steps, accuracy is 0.952381,loss is 0.437043
after 200 training steps, accuracy is 1,loss is 0.341079
after 300 training steps, accuracy is 1,loss is 0.343718
after 400 training steps, accuracy is 1,loss is 0.340793
after 500 training steps, accuracy is 1,loss is 0.34124
after 600 training steps, accuracy is 1,loss is 0.341603
after 700 training steps, accuracy is 1,loss is 0.34144
after 800 training steps, accuracy is 1,loss is 0.34089
after 900 training steps, accuracy is 1,loss is 0.342235
after 1000 training steps, accuracy is 1,loss is 0.341608
after 1100 training steps, accuracy is 1,loss is 0.340753
after 1200 training steps, accuracy is 1,loss is 0.343047
after 1300 training steps, accuracy is 1,loss is 0.340775
after 1400 training steps, accuracy is 1,loss is 0.341164
after 1500 training steps, accuracy is 1,loss is 0.340755
after 1600 training steps, accuracy is 1,loss is 0.340856
after 1700 training steps, accuracy is 1,loss is 0.341183
after 1800 training steps, accuracy is 1,loss is 0.340931
after 1900 training steps, accuracy is 1,loss is 0.340826
learning rate is 0.00999592,global_step is 1
learning rate is 0.00959608,global_step is 101
learning rate is 0.00921224,global_step is 201
learning rate is 0.00884375,global_step is 301
learning rate is 0.00849,global_step is 401
learning rate is 0.0081504,global_step is 501
learning rate is 0.00782438,global_step is 601
learning rate is 0.00751141,global_step is 701
learning rate is 0.00721095,global_step is 801
learning rate is 0.00692251,global_step is 901
learning rate is 0.00664561,global_step is 1001
learning rate is 0.00637979,global_step is 1101
learning rate is 0.0061246,global_step is 1201
learning rate is 0.00587961,global_step is 1301
learning rate is 0.00564443,global_step is 1401
learning rate is 0.00541865,global_step is 1501
learning rate is 0.0052019,global_step is 1601
learning rate is 0.00499383,global_step is 1701
learning rate is 0.00479407,global_step is 1801
learning rate is 0.00460231,global_step is 1901