TF-day6: Simple Classification with a CNN

Main contents:

  • What is a CNN
  • Code

I. What is a CNN
An illustrated explanation of CNNs:
http://www.jianshu.com/p/6daa1af1cf37

For a deeper understanding:
http://study.163.com/course/courseMain.htm?courseId=1003223001

II. Code and walkthrough

import tensorflow as tf
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE
from sklearn.metrics import classification_report

1. Getting the data

## get the data
def get_data(argv=None):
    df = pd.read_excel('/home/xp/下載/jxdc/項目評分表last.xls')
    df_feature = df.iloc[:, 2:].fillna(df.mean())
    # print(df_feature.shape)  ##(449,70)
    df_label = df.iloc[:, 1]

    ## balance the classes (oversample with SMOTE)
    smote = SMOTE('auto')
    x_sample, y_sample = smote.fit_sample(df_feature, df_label)
    # print(x_sample.shape)    ##(690,70)
    # print(y_sample.shape)    ##(690,)

    ## convert the labels to one-hot vectors
    X = x_sample  # balanced input
    # X = df_feature  # unbalanced input
    Y = []
    for i in y_sample:
    # for i in df_label:
        if i == 'A':
            Y.append([1, 0, 0, 0])
        elif i == 'B':
            Y.append([0, 1, 0, 0])
        elif i == 'C':
            Y.append([0, 0, 1, 0])
        else:
            Y.append([0, 0, 0, 1])
    # train(X,Y)
    return X,Y
  1. fillna(df.mean()): the data contains missing values, which are filled with the mean of each column.
  2. Class balancing: class A is under-represented in the data while class D is very common. SMOTE synthesizes new samples for the minority classes so that each class ends up with as many samples as the largest class.
    https://pypi.python.org/pypi/imbalanced-learn
    For the details of how SMOTE works, see:
    http://blog.csdn.net/Yaphat/article/details/52463304?locationNum=7
    http://blog.csdn.net/yaphat/article/details/60347968
    Main idea: analyze the minority-class samples, synthesize new artificial samples from them, and add these to the dataset.
  3. One-hot vectors:
    Why convert the labels to one-hot vectors? Training a neural network means driving the loss down, and the cross entropy tf.nn.softmax_cross_entropy_with_logits takes two probability distributions as input; a one-hot vector can be viewed as a probability distribution (a small numeric example follows below).
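
As a small numeric illustration (a toy sketch, not part of the original pipeline), the cross entropy between a one-hot label and a softmax output reduces to the negative log-probability assigned to the true class:

import numpy as np

# hypothetical softmax output of the network for one sample (4 classes: A, B, C, D)
probs = np.array([0.70, 0.10, 0.15, 0.05])
# one-hot label for class 'A'
label = np.array([1, 0, 0, 0])

# H(label, probs) = -sum(label * log(probs)) = -log(probability of the true class)
cross_entropy = -np.sum(label * np.log(probs))
print(cross_entropy)   # ~0.357, i.e. -log(0.70)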

2. Forward propagation

## forward propagation
INPUT_NODE = 70
OUTPUT_NODE = 4

IMAGE_LONGTH = 70
IMAGE_WIDTH = 1
NUM_CHANNELS = 1
NUM_LABELS = 4

## size and depth of the first convolutional layer
CONV1_DEEP = 16
CONV1_SIZE_L = 4
CONV1_SIZE_W = 1

## size and depth of the second convolutional layer
CONV2_DEEP = 32
CONV2_SIZE_L = 2
CONV2_SIZE_W = 1

## number of nodes in the fully connected layer
FC_SIZE = 128

def inference(input_tensor,train,regularizer):
    ### Separate variable scopes isolate each layer's variables, so name clashes are not a concern.
    ## layer 1: convolutional layer
    with tf.variable_scope("layer1-conv1"):
        conv1_weights = tf.get_variable("weight",[CONV1_SIZE_L,CONV1_SIZE_W,NUM_CHANNELS,CONV1_DEEP],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv1_biases = tf.get_variable("bias",[CONV1_DEEP],initializer=tf.constant_initializer(0.1))
        conv1 = tf.nn.conv2d(input_tensor,conv1_weights,strides=[1,2,1,1],padding='SAME')
        relu1 = tf.nn.relu(tf.nn.bias_add(conv1,conv1_biases))

Convolutional layer: a weighted sum over the corresponding nodes.
1. tf.variable_scope() is used to manage variables; a neural network has a great many variables, and scoping them means duplicate names are no longer a worry.
2. The filter: conv1_weights, Tensor("layer1-conv1/weight:0", shape=(4, 1, 1, 16), dtype=float32_ref)
name: 'layer1-conv1/weight:0'
The shape has four dimensions: the first two are the filter size, the third is the depth of the current layer, and the fourth is the number (depth) of filters.
3. strides: the stride along each dimension. The first and last entries must be 1, because the convolution stride only applies to the length and width dimensions.
4. padding: 'SAME' means the input is zero-padded, 'VALID' means no padding is added; the resulting output size is worked out in the sketch below.
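
A minimal sketch (TF 1.x, with a throwaway variable name demo_w) that checks the output size of this first convolution: with 'SAME' padding and a stride of 2 along the length, the output length is ceil(70 / 2) = 35:

import tensorflow as tf

x_demo = tf.placeholder(tf.float32, [None, 70, 1, 1])   # batch of length-70, width-1, 1-channel inputs
w_demo = tf.get_variable("demo_w", [4, 1, 1, 16],
                         initializer=tf.truncated_normal_initializer(stddev=0.1))
conv_demo = tf.nn.conv2d(x_demo, w_demo, strides=[1, 2, 1, 1], padding='SAME')
print(conv_demo.get_shape().as_list())                   # [None, 35, 1, 16]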


    ## layer 2: pooling layer
    with tf.name_scope("layer2-pool1"):
        pool1 = tf.nn.max_pool(relu1,ksize=[1,3,1,1],strides=[1,2,1,1],padding="SAME")
        # pool1 = tf.nn.avg_pool(relu1,ksize=[1,3,1,1],strides=[1,2,1,1],padding="SAME")

Pooling layer: shrinks the feature map, which reduces the number of parameters in the final fully connected layers; this both speeds up computation and helps prevent overfitting.
1. Max pooling and average pooling are the two common variants (compare the toy sketch below).
2. ksize: the first and last entries must be 1, which means the pooling filter may not span across samples or across the node depth.
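
To make the difference between the two concrete, here is a toy NumPy sketch (not from the original code) of max vs. average pooling over a 1-D signal with window 2 and stride 2:

import numpy as np

signal = np.array([1.0, 5.0, 2.0, 8.0, 3.0, 3.0])   # toy 1-D feature map
window = 2                                           # pooling window of size 2, stride 2

max_pooled = [signal[i:i + window].max()  for i in range(0, len(signal), window)]
avg_pooled = [signal[i:i + window].mean() for i in range(0, len(signal), window)]

print(max_pooled)   # [5.0, 8.0, 3.0]
print(avg_pooled)   # [3.0, 5.0, 3.0]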


    ## layer 3: convolutional layer
    with tf.variable_scope("layer3-conv2"):
        conv2_weights = tf.get_variable("weight",[CONV2_SIZE_L,CONV2_SIZE_W,CONV1_DEEP,CONV2_DEEP],initializer=tf.truncated_normal_initializer(stddev=0.1))
        conv2_biases = tf.get_variable("bias",[CONV2_DEEP],initializer=tf.constant_initializer(0.1))
        conv2 = tf.nn.conv2d(pool1,conv2_weights,strides=[1,2,1,1],padding='SAME')
        relu2 = tf.nn.relu(tf.nn.bias_add(conv2,conv2_biases))

    ## layer 4: pooling layer
    with tf.name_scope('layer4-pool2'):
        pool2 = tf.nn.max_pool(relu2,ksize=[1,2,1,1],strides=[1,3,1,1],padding='SAME')

    ## layer 5: fully connected layer
    pool_shape = pool2.get_shape().as_list()
    ## compute the length of the vector after flattening the matrix; pool_shape[0] is the number of samples in the batch
    nodes = pool_shape[1]*pool_shape[2]*pool_shape[3]    # nodes: 96
    reshaped = tf.reshape(pool2,[-1,nodes])   ### -1 is used because the batch size is not fixed

    with tf.variable_scope('layers-fc1'):
        fc1_weights = tf.get_variable("weight",[nodes,FC_SIZE],initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None:                
            tf.add_to_collection("losses",regularizer(fc1_weights))
        fc1_biases = tf.get_variable("bias",[FC_SIZE],initializer=tf.constant_initializer(0.1))

        fc1 = tf.nn.relu(tf.matmul(reshaped,fc1_weights) + fc1_biases)
        if train:
            fc1 = tf.nn.dropout(fc1,0.5)  

Fully connected layer:
1. First flatten the output of the fourth (pooling) layer into a vector: reshaped = tf.reshape(pool2,[-1,nodes]). The number of samples can vary, so -1 is used for that dimension.
2. The regularization term on the weights is added to the 'losses' collection.
3. dropout (only applied during training; see the sketch below):
http://blog.csdn.net/stdcoutzyx/article/details/49022443
https://yq.aliyun.com/articles/68901
http://www.cnblogs.com/tornadomeet/p/3258122.html
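
A minimal dropout sketch (TF 1.x API, toy values): at training time each activation is kept with probability keep_prob and the survivors are scaled up by 1/keep_prob, so the expected sum is unchanged:

import tensorflow as tf

fc_demo = tf.constant([[1.0, 2.0, 3.0, 4.0]])
dropped = tf.nn.dropout(fc_demo, keep_prob=0.5)   # keep each element with prob. 0.5, scale kept ones by 2

with tf.Session() as sess:
    print(sess.run(dropped))                      # e.g. [[2. 0. 6. 8.]] -- the result is random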


    ## layer 6: softmax layer (output logits)
    with tf.variable_scope("layer6-fc2"):
        fc2_weights = tf.get_variable('weight',[FC_SIZE,NUM_LABELS],initializer=tf.truncated_normal_initializer(stddev=0.1))
        if regularizer != None:
            tf.add_to_collection('losses',regularizer(fc2_weights))
        fc2_biases = tf.get_variable('bias',[NUM_LABELS],initializer=tf.constant_initializer(0.1))
        logit = tf.matmul(fc1,fc2_weights) +fc2_biases
    return logit

Softmax layer:
1. This layer is the same as the fully connected layer above, except that it has no dropout (the softmax itself is applied later; see the sketch below).
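
Note that the layer only outputs logits; the softmax is applied inside tf.nn.softmax_cross_entropy_with_logits. Numerically (a toy sketch), softmax just exponentiates the logits and normalizes them into probabilities:

import numpy as np

logits = np.array([2.0, 1.0, 0.1, -1.0])          # hypothetical logits for one sample
probs = np.exp(logits) / np.sum(np.exp(logits))   # softmax
print(probs, probs.sum())                         # probabilities that sum to 1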

3. Training the network

BATCH_SIZE = 100
LEARNING_RATE_BASE = 0.8
LEARNING_RATE_DECAY = 0.96
REGULARAZTION_RATE = 0.001
TRAINING_STEPS = 4000
MOVING_AVERAGE_DECAY = 0.99

## path and filename for saving the model
MODEL_SAVE_PATH = "/home/panxie/PycharmProjects/ML/jxdc/code/cnnclassify/model.ckpt"
# MODEL_NAME = "model.ckpt"

## train the neural network
def train(X,Y):
    x = tf.placeholder(tf.float32, [None,IMAGE_LONGTH, IMAGE_WIDTH,NUM_CHANNELS],name='x_input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    regularizer = tf.contrib.layers.l2_regularizer(REGULARAZTION_RATE)

    y = inference(x,0.5,regularizer)
    ## multiply y by 1 so the output tensor can be given the name 'y'
    b = tf.constant(value=1, dtype=tf.float32)
    y = tf.multiply(y, b, name='y')

    ## training-step counter, marked as not trainable
    global_step = tf.Variable(0, trainable=False)
    variable_average = tf.train.ExponentialMovingAverage(MOVING_AVERAGE_DECAY, global_step)
    variable_average_op = variable_average.apply(tf.trainable_variables())

    ## cross entropy plus the regularization terms
    cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_))
    loss = cross_entropy + tf.add_n(tf.get_collection('losses'))

    ## learning rate (exponential-decay variants are shown commented out below)
    learning_rate = 0.01
    # learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.9,staircase=True)
    # learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=1,decay_rate=0.96,staircase=False)

    ### optimizer
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)

    ### after each pass over the data, both the network parameters and the moving average of every parameter must be updated
    with tf.control_dependencies([train_step,variable_average_op]):
        train_op = tf.no_op(name='train')

    ## accuracy
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))  ### argmax returns the index of the largest entry
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    ## create the TensorFlow Saver for model persistence
    saver = tf.train.Saver()
    # saver.export_meta_graph("/home/pan-xie/PycharmProjects/ML/jxdc/code/cnnclassify/model.deda.json",as_text=True)
    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        for i in range(TRAINING_STEPS):

            trainX, validationX, trainY, validationY = train_test_split(X, Y, test_size=0.25, random_state=0)

            data_size = len(trainX)
            start = (i * BATCH_SIZE) % data_size
            end = min(start + BATCH_SIZE,data_size)
            # if (end < data_size):
            #     xs = trainX[start:end]
            #     ys = trainY[start:end]
            # else:
            #     xs = [trainX.tolist()[start:end].append(j) for j in trainX[0:(end-data_size)]]
            #     ys = [trainY[start:end].append(j) for j in trainY[0:(end-data_size)]]

            ### training batch
            xs_train_reshape = np.reshape(trainX[start:end], (-1, IMAGE_LONGTH, IMAGE_WIDTH, NUM_CHANNELS))
            train_feed = {x:xs_train_reshape,y_:trainY[start:end]}

            ## validation set
            xs_valid_reshape = np.reshape(validationX, (-1, IMAGE_LONGTH, IMAGE_WIDTH, NUM_CHANNELS))
            validation_feed = {x: xs_valid_reshape, y_: validationY}

            _,loss_value,step,accuracy_train = sess.run([train_op,loss,global_step,accuracy],feed_dict=train_feed)

            loss_valid,acc_valid = sess.run([loss,accuracy],feed_dict=validation_feed)

            # if i %500 == 0:
            #     print("After %d training steps,""loss and accuracy on training is %g and %g,""loss and accuracy on validiation is %g and %g"%(step,loss_value,accuracy_train,loss_valid,acc_valid))
                # saver.save(sess,os.path.join(MODEL_SAVE_PATH,MODEL_NAME),global_step=global_step)

        trainX, testX, trainY, testY = train_test_split(X, Y, test_size=0.25, random_state=0)
        xs_test_reshape = np.reshape(testX, (-1, IMAGE_LONGTH, IMAGE_WIDTH, NUM_CHANNELS))
        test_feed = {x: xs_test_reshape, y_: testY}

        # evaluate classification metrics on the test set
        target_names = ['A', 'B', 'C', 'D']
        rating_test_ = sess.run(tf.argmax(y_, 1), feed_dict=test_feed)
        test_preds1 = sess.run(tf.argmax(y, 1), feed_dict=test_feed)
        print(classification_report(rating_test_, test_preds1, target_names=target_names))

        saver.save(sess, MODEL_SAVE_PATH)
        # saver.save(sess,os.path.join(MODEL_SAVE_PATH,MODEL_NAME),global_step=global_step)
        sess.close()
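
The constants LEARNING_RATE_BASE and LEARNING_RATE_DECAY defined above are only used by the commented-out exponential-decay variant. Enabling it would look roughly like this (a sketch of the TF 1.x API; the decay_steps value of 100 is an arbitrary choice):

learning_rate = tf.train.exponential_decay(
    LEARNING_RATE_BASE,      # initial learning rate
    global_step,             # current training step
    100,                     # decay_steps: decay once every 100 steps
    LEARNING_RATE_DECAY,     # multiplicative decay factor
    staircase=True)          # decay in discrete steps instead of continuously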

Training the network:
1. from sklearn.model_selection import train_test_split splits the data randomly into two sets.
For more on cross-validation, see: http://scikit-learn.org/stable/modules/cross_validation.html#cross-validation
2. from sklearn.metrics import classification_report
On model evaluation and quantifying prediction quality, see: http://scikit-learn.org/stable/modules/model_evaluation.html#classification-report
https://blog.argcv.com/articles/1036.c

         precision    recall  f1-score   support

      A       0.97      0.97      0.97        70
      B       0.73      0.62      0.67        13
      C       0.62      0.45      0.53        22
      D       0.86      0.96      0.90        68

avg / total       0.86      0.87      0.86       173

Taking class A as an example:
precision is the fraction of samples predicted as A that actually belong to class A (correctly classified retrieved samples / all retrieved samples);
recall is the fraction of all true class-A samples in the test set that were predicted as A;
the f1-score is the harmonic mean of precision and recall, i.e. 2/F1 = 1/P + 1/R (see the worked example below).
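
For example, plugging class B's precision and recall from the report above into the harmonic-mean formula reproduces its f1-score:

precision, recall = 0.73, 0.62                     # class B from the report above
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))                                # 0.67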

if __name__=='__main__':
    X,Y = get_data()
    train(X,Y)
    # tf.app.run()

github: http://www.cnblogs.com/schaepher/p/5561193.html
Complete code: https://github.com/PanXiebit/CNN_Classify
