1.4.4 Stochastic Gradient Descent and Building a Machine Learning Algorithm (Introduction)

Stochastic gradient descent (SGD)

SGD is an extension of the gradient descent algorithm.

A recurring problem in machine learning is that good generalization requires a large training set, but large training sets are also computationally expensive.

The cost function used by a machine learning algorithm can usually be decomposed into a sum of per-example cost functions. For example, the negative conditional log-likelihood of the training data can be written as

$J(\theta) = \mathbb{E}_{x, y \sim \hat{p}_{\text{data}}} L(x, y, \theta) = \frac{1}{m} \sum_{i=1}^{m} L(x^{(i)}, y^{(i)}, \theta)$

where $L$ is the per-example loss, $L(x, y, \theta) = -\log p(y \mid x; \theta)$.
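As a small concrete illustration (a minimal NumPy sketch; the probabilities below are made-up numbers, not from any real model), the total cost is just the average of the per-example negative log-likelihoods:

import numpy as np

# Hypothetical model probabilities p(y_i | x_i; theta) for m = 4 examples.
probs = np.array([0.9, 0.7, 0.8, 0.6])

# Per-example loss: L(x, y, theta) = -log p(y | x; theta).
per_example_loss = -np.log(probs)

# J(theta) is the mean of the per-example losses.
J = per_example_loss.mean()
print(J)  # about 0.299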

Computed this way, the cost of one gradient evaluation is $O(m)$, which becomes prohibitively expensive as the training set grows.

The core insight of stochastic gradient descent is that the gradient is an expectation, and an expectation can be approximately estimated from a small sample.

Specifically, at each step of the algorithm we uniformly draw a minibatch of examples from the training set and compute the update using only those examples; a sketch of the resulting loop follows below.
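The sketch below shows this update loop in plain NumPy, using least-squares linear regression purely for illustration (the data, learning rate, and batch size are all assumed values):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                 # m = 1000 examples, 5 features
true_theta = np.array([1., 2., 3., 4., 5.])
y = X @ true_theta + rng.normal(scale=0.1, size=1000)

theta = np.zeros(5)
lr, batch_size = 0.1, 32

for step in range(200):
    # Uniformly sample a minibatch from the training set.
    idx = rng.integers(0, len(X), size=batch_size)
    Xb, yb = X[idx], y[idx]
    # The minibatch gradient of the mean squared error is an unbiased
    # estimate of the full-dataset gradient, at cost O(batch_size)
    # per update instead of O(m).
    grad = (2.0 / batch_size) * Xb.T @ (Xb @ theta - yb)
    theta -= lr * grad

print(theta)  # close to true_theta after a few hundred updates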

Building a machine learning algorithm

Nearly all deep learning algorithms can be described by a fairly simple recipe: a particular dataset -> a cost function -> an optimization procedure -> a model.
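The TensorFlow (1.x) script below instantiates this recipe for MNIST digit classification; the numbered comments mark the dataset, the cost function, and the optimization/training steps.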

from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# 1 ---- the dataset
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

def add_layer(inputs, in_size, out_size, activation_function=None):
    # Add one fully connected layer and return its output.
    Weights = tf.Variable(tf.random_normal([in_size, out_size]))
    biases = tf.Variable(tf.zeros([1, out_size]) + 0.1)
    Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
    if activation_function is None:
        outputs = Wx_plus_b
    else:
        outputs = activation_function(Wx_plus_b)
    return outputs


def compute_accuracy(v_xs, v_ys):
    # Evaluate classification accuracy of the current model on (v_xs, v_ys).
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs})
    correct_prediction = tf.equal(tf.argmax(y_pre, 1), tf.argmax(v_ys, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys})
    return result

# Define placeholders for the inputs to the network.
xs = tf.placeholder(tf.float32, [None, 784])  # 28*28 pixels per image
ys = tf.placeholder(tf.float32, [None, 10])   # 10 digit classes

# Add the output layer: a softmax over the 10 classes.
prediction = add_layer(xs, 784, 10, activation_function=tf.nn.softmax)

# 2 ---- the cost function: cross-entropy between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction), axis=[1]))
# Use (minibatch) gradient descent as the optimization procedure.
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# 3 ---- the training process
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(1000):
        # Each update uses a minibatch of 100 examples: stochastic gradient descent.
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
        if i % 50 == 0:
            print(compute_accuracy(mnist.test.images, mnist.test.labels))
Output (test accuracy on the MNIST test set, printed every 50 training steps):
0.072
0.6446
0.7387
0.7757
0.8001
0.813
0.829
0.8384
0.844
0.8396
0.852
0.8574
0.857
0.8624
0.8627
0.8649
0.8675
0.8696
0.8695
0.8751
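After 1,000 minibatch updates this single softmax layer reaches roughly 87.5% test accuracy; deeper models and better optimizers improve on this, but the dataset -> cost function -> optimization -> model recipe stays the same.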