Machine Learning Assignment

Indoor Localization Based on WiFi Signal Strength

Given dataset:

The dataset has 2000 rows and eight columns. The first seven columns are the features: signal strengths measured from seven different WiFi access points. The last column is the label: the room number, an integer from 1 to 4.
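Assuming the file is tab/whitespace separated with the label in the last column (the same layout the loader below expects), the whole set can also be read in two lines with numpy; a minimal sketch:

import numpy as np

raw = np.loadtxt('wifi_localization.txt', dtype=int)   # expected shape: (2000, 8)
data, label = raw[:, :7], raw[:, 7]                    # seven signal strengths, room label 1-4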

Algorithm design:

This is a supervised classification problem.
First, implement a function that reads the file and splits each row into features and a label. Then implement a helper function that creates weight variables, and use it to build a three-layer fully connected neural network (7 inputs → 10 hidden ReLU units → 4 output logits) that classifies each sample into one of the four rooms.
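For reference only, the same 7 → 10 → 4 architecture can be written compactly with the tf.keras API in a recent TensorFlow. This is a sketch of an equivalent model, not the implementation used below, which sticks to the low-level TF1 API:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(7,),
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001)),
    tf.keras.layers.Dense(4)  # logits for the four rooms
])
model.compile(optimizer=tf.keras.optimizers.Adam(0.001),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
# model.fit(train_data, train_label - 1, batch_size=40, epochs=40)  # labels shifted to 0-3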

Code implementation:

import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

'''Function that reads the data file'''
def loadDataSet(fileName):
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = line.strip().split('\t')
        # first seven columns: WiFi signal strengths; last column: room label (1-4)
        dataMat.append([int(x) for x in lineArr[0:7]])
        labelMat.append(int(lineArr[7]))
    fr.close()
    data = np.array(dataMat)
    label = np.array(labelMat)
    return data, label
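'''Optional sanity check of the loader (the path is whatever local copy of the dataset is used)'''
# data, label = loadDataSet('wifi_localization.txt')
# print(data.shape, label.shape)   # expected: (2000, 7) (2000,)
# print(np.unique(label))          # expected: [1 2 3 4]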
'''Read the data, then split it into training and test sets'''
dataArr, labelArr = loadDataSet('/home/wangjiaer/class_machine_learning/HW1-WirelessIndoorLocalization/wifi_localization.txt')
train_data, test_data, train_label, test_label = train_test_split(dataArr, labelArr, test_size=0.1, random_state=3)

'''Hyperparameters'''
INPUT_NODE = 7              # seven WiFi signal strengths per sample
OUTPUT_NODE = 4             # four rooms
LAYER1_NODE = 10            # width of the hidden layer
BATCH_SIZE = 40
REGULARIZATION_RATE = 0.0001
DATASIZE = 1800             # 90% of the 2000 samples go to the training set
LEARNING_RATE = 0.001
EPOCH = 40

'''Helper that creates a weight variable and registers its L2 penalty'''
def get_weight_variable(shape, regularizer):
    weights = tf.get_variable("weights", shape, initializer=tf.truncated_normal_initializer(stddev=0.1))
    if regularizer is not None:
        tf.add_to_collection('losses', regularizer(weights))
    return weights
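'''For reference: tf.contrib.layers.l2_regularizer(scale) applied to a weight matrix W returns
scale * tf.nn.l2_loss(W), i.e. scale * sum(W**2) / 2. Each call above adds that term to the
'losses' collection, and the collected terms are summed into the total loss further below.'''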

'''Three-layer fully connected network: 7 inputs -> 10 hidden ReLU units -> 4 output logits'''
def inference(input_tensor, regularizer):
    with tf.variable_scope('layer1'):
        weights = get_weight_variable([INPUT_NODE, LAYER1_NODE], regularizer)
        biases = tf.get_variable("biases", [LAYER1_NODE], initializer=tf.constant_initializer(0.0))
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights) + biases)
    with tf.variable_scope('layer2'):
        weights = get_weight_variable([LAYER1_NODE, OUTPUT_NODE], regularizer)
        biases = tf.get_variable("biases", [OUTPUT_NODE], initializer=tf.constant_initializer(0.0))
        layer2 = tf.matmul(layer1, weights) + biases
    return layer2
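'''For scale, the model is tiny: layer1 has 7*10 weights + 10 biases = 80 parameters and
layer2 has 10*4 weights + 4 biases = 44, i.e. 124 trainable parameters in total.'''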

'''Build the computation graph for training'''
x = tf.placeholder(tf.float32, [None, INPUT_NODE],name='x-input')
y_ = tf.placeholder(tf.int32, [None],name='y-input')
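# labels in the file are 1-4, so subtract 1 to get class indices 0-3;
# e.g. a label of 3 becomes index 2 and the one-hot vector [0, 0, 1, 0]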
y_onehot = tf.one_hot(y_ - 1, 4)
# L2 regularization applied to the weight matrices
regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
y = inference(x, regularizer)
# use softmax cross-entropy as the main loss term, averaged over the batch
cross_entropy_mean = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_onehot))
loss = cross_entropy_mean + tf.add_n(tf.get_collection('losses'))
tf.summary.scalar('loss', loss)
# optimize the loss with the Adam optimizer (tf.train.AdamOptimizer)
train_step = tf.train.AdamOptimizer(LEARNING_RATE).minimize(loss)
# check whether the predicted room matches the true room
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_onehot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('validate_acc', accuracy)

# initialize the session and start training
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("graph", sess.graph)

    for i in range(EPOCH):
        for j in range(DATASIZE // BATCH_SIZE):
            batch_x = train_data[j * BATCH_SIZE:(j + 1) * BATCH_SIZE]
            batch_y = train_label[j * BATCH_SIZE:(j + 1) * BATCH_SIZE]
            # one training step; also evaluate the merged summaries for TensorBoard
            _, summary = sess.run([train_step, merged], feed_dict={x: batch_x, y_: batch_y})
            writer.add_summary(summary, i * (DATASIZE // BATCH_SIZE) + j)
            if j == 0:
                print('Epoch:%d' % (i))
            if j % 5 == 0:
                trainloss = sess.run(loss, feed_dict={x: batch_x, y_: batch_y})
                print('Epoch:%d, Iter:%d, loss:%f' % (i, j, trainloss))

        acc = sess.run(accuracy, feed_dict={x: test_data, y_: test_label})
        print('After %d epoch,test accuracy is %f' % (i, acc))
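Since the loss and accuracy scalars are written as TensorBoard summaries to the "graph" directory, the curves can be inspected after (or during) training by running TensorBoard from the same working directory, e.g.:

tensorboard --logdir graph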

Program output:

Epoch:0
Epoch:0, Iter:0, loss:2.037908
Epoch:0, Iter:5, loss:1.427482
Epoch:0, Iter:10, loss:1.316093
Epoch:0, Iter:15, loss:1.257610
Epoch:0, Iter:20, loss:1.215235
Epoch:0, Iter:25, loss:1.181677
Epoch:0, Iter:30, loss:1.147582
Epoch:0, Iter:35, loss:1.111368
Epoch:0, Iter:40, loss:1.073224
After 0 epoch,test accuracy is 0.500000
Epoch:1
Epoch:1, Iter:0, loss:1.110317
Epoch:1, Iter:5, loss:1.062317
Epoch:1, Iter:10, loss:1.024702
Epoch:1, Iter:15, loss:0.986477
Epoch:1, Iter:20, loss:0.949597
Epoch:1, Iter:25, loss:0.912772
Epoch:1, Iter:30, loss:0.876088
Epoch:1, Iter:35, loss:0.839557
Epoch:1, Iter:40, loss:0.803097
After 1 epoch,test accuracy is 0.915000
Epoch:2
Epoch:2, Iter:0, loss:0.747965
Epoch:2, Iter:5, loss:0.703533
Epoch:2, Iter:10, loss:0.661589
Epoch:2, Iter:15, loss:0.622188
Epoch:2, Iter:20, loss:0.586072
Epoch:2, Iter:25, loss:0.552943
Epoch:2, Iter:30, loss:0.522533
Epoch:2, Iter:35, loss:0.494555
Epoch:2, Iter:40, loss:0.468805
After 2 epoch,test accuracy is 0.960000
Epoch:3
Epoch:3, Iter:0, loss:0.455281
Epoch:3, Iter:5, loss:0.434474
Epoch:3, Iter:10, loss:0.414433
Epoch:3, Iter:15, loss:0.396565
Epoch:3, Iter:20, loss:0.380466
Epoch:3, Iter:25, loss:0.365907
Epoch:3, Iter:30, loss:0.352298
Epoch:3, Iter:35, loss:0.339518
Epoch:3, Iter:40, loss:0.327470
After 3 epoch,test accuracy is 0.990000
Epoch:4
Epoch:4, Iter:0, loss:0.302066
Epoch:4, Iter:5, loss:0.286029
Epoch:4, Iter:10, loss:0.272045
Epoch:4, Iter:15, loss:0.259592
Epoch:4, Iter:20, loss:0.247963
Epoch:4, Iter:25, loss:0.236982
Epoch:4, Iter:30, loss:0.226625
Epoch:4, Iter:35, loss:0.216845
Epoch:4, Iter:40, loss:0.207597
After 4 epoch,test accuracy is 0.985000
Epoch:5
Epoch:5, Iter:0, loss:0.205532
Epoch:5, Iter:5, loss:0.194966
Epoch:5, Iter:10, loss:0.187082
Epoch:5, Iter:15, loss:0.179286
Epoch:5, Iter:20, loss:0.172329
Epoch:5, Iter:25, loss:0.165981
Epoch:5, Iter:30, loss:0.160231
Epoch:5, Iter:35, loss:0.154836
Epoch:5, Iter:40, loss:0.149780
After 5 epoch,test accuracy is 0.985000
Epoch:6
Epoch:6, Iter:0, loss:0.165187
Epoch:6, Iter:5, loss:0.157391
Epoch:6, Iter:10, loss:0.149630
Epoch:6, Iter:15, loss:0.142534
Epoch:6, Iter:20, loss:0.136197
Epoch:6, Iter:25, loss:0.130554
Epoch:6, Iter:30, loss:0.125456
Epoch:6, Iter:35, loss:0.120786
Epoch:6, Iter:40, loss:0.116467
After 6 epoch,test accuracy is 0.970000
Epoch:7
Epoch:7, Iter:0, loss:0.096673
Epoch:7, Iter:5, loss:0.092360
Epoch:7, Iter:10, loss:0.086961
Epoch:7, Iter:15, loss:0.084012
Epoch:7, Iter:20, loss:0.080848
Epoch:7, Iter:25, loss:0.078112
Epoch:7, Iter:30, loss:0.075584
Epoch:7, Iter:35, loss:0.073226
Epoch:7, Iter:40, loss:0.071017
After 7 epoch,test accuracy is 0.975000
Epoch:8
Epoch:8, Iter:0, loss:0.169294
Epoch:8, Iter:5, loss:0.159428
Epoch:8, Iter:10, loss:0.153244
Epoch:8, Iter:15, loss:0.147434
Epoch:8, Iter:20, loss:0.142029
Epoch:8, Iter:25, loss:0.137126
Epoch:8, Iter:30, loss:0.132623
Epoch:8, Iter:35, loss:0.128387
Epoch:8, Iter:40, loss:0.124386
After 8 epoch,test accuracy is 0.990000
Epoch:9
Epoch:9, Iter:0, loss:0.084787
Epoch:9, Iter:5, loss:0.078379
Epoch:9, Iter:10, loss:0.073668
Epoch:9, Iter:15, loss:0.070215
Epoch:9, Iter:20, loss:0.067355
Epoch:9, Iter:25, loss:0.064733
Epoch:9, Iter:30, loss:0.062382
Epoch:9, Iter:35, loss:0.060253
Epoch:9, Iter:40, loss:0.058297
After 9 epoch,test accuracy is 0.965000
Epoch:10
Epoch:10, Iter:0, loss:0.124418
Epoch:10, Iter:5, loss:0.114758
Epoch:10, Iter:10, loss:0.111773
Epoch:10, Iter:15, loss:0.108822
Epoch:10, Iter:20, loss:0.106253
Epoch:10, Iter:25, loss:0.104059
Epoch:10, Iter:30, loss:0.102024
Epoch:10, Iter:35, loss:0.100087
Epoch:10, Iter:40, loss:0.098244
After 10 epoch,test accuracy is 0.990000
Epoch:11
Epoch:11, Iter:0, loss:0.079318
Epoch:11, Iter:5, loss:0.074410
Epoch:11, Iter:10, loss:0.069275
Epoch:11, Iter:15, loss:0.065198
Epoch:11, Iter:20, loss:0.061933
Epoch:11, Iter:25, loss:0.059090
Epoch:11, Iter:30, loss:0.056552
Epoch:11, Iter:35, loss:0.054334
Epoch:11, Iter:40, loss:0.052364
After 11 epoch,test accuracy is 0.980000
Epoch:12
Epoch:12, Iter:0, loss:0.089489
Epoch:12, Iter:5, loss:0.085162
Epoch:12, Iter:10, loss:0.081850
Epoch:12, Iter:15, loss:0.079369
Epoch:12, Iter:20, loss:0.077012
Epoch:12, Iter:25, loss:0.074921
Epoch:12, Iter:30, loss:0.072979
Epoch:12, Iter:35, loss:0.071152
Epoch:12, Iter:40, loss:0.069407
After 12 epoch,test accuracy is 0.975000
Epoch:13
Epoch:13, Iter:0, loss:0.238772
Epoch:13, Iter:5, loss:0.220581
Epoch:13, Iter:10, loss:0.205470
Epoch:13, Iter:15, loss:0.195101
Epoch:13, Iter:20, loss:0.188185
Epoch:13, Iter:25, loss:0.182077
Epoch:13, Iter:30, loss:0.176706
Epoch:13, Iter:35, loss:0.172282
Epoch:13, Iter:40, loss:0.168455
After 13 epoch,test accuracy is 0.990000
Epoch:14
Epoch:14, Iter:0, loss:0.132144
Epoch:14, Iter:5, loss:0.106548
Epoch:14, Iter:10, loss:0.096048
Epoch:14, Iter:15, loss:0.091068
Epoch:14, Iter:20, loss:0.086365
Epoch:14, Iter:25, loss:0.083849
Epoch:14, Iter:30, loss:0.081931
Epoch:14, Iter:35, loss:0.080258
Epoch:14, Iter:40, loss:0.078581
After 14 epoch,test accuracy is 0.950000
Epoch:15
Epoch:15, Iter:0, loss:0.123271
Epoch:15, Iter:5, loss:0.131147
Epoch:15, Iter:10, loss:0.078805
Epoch:15, Iter:15, loss:0.078475
Epoch:15, Iter:20, loss:0.070494
Epoch:15, Iter:25, loss:0.068975
Epoch:15, Iter:30, loss:0.066681
Epoch:15, Iter:35, loss:0.065260
Epoch:15, Iter:40, loss:0.063765
After 15 epoch,test accuracy is 0.995000
Epoch:16
Epoch:16, Iter:0, loss:0.155497
Epoch:16, Iter:5, loss:0.122249
Epoch:16, Iter:10, loss:0.121781
Epoch:16, Iter:15, loss:0.118506
Epoch:16, Iter:20, loss:0.116796
Epoch:16, Iter:25, loss:0.115072
Epoch:16, Iter:30, loss:0.113538
Epoch:16, Iter:35, loss:0.112178
Epoch:16, Iter:40, loss:0.110849
After 16 epoch,test accuracy is 0.990000
Epoch:17
Epoch:17, Iter:0, loss:0.171954
Epoch:17, Iter:5, loss:0.136426
Epoch:17, Iter:10, loss:0.120416
Epoch:17, Iter:15, loss:0.115077
Epoch:17, Iter:20, loss:0.109985
Epoch:17, Iter:25, loss:0.106659
Epoch:17, Iter:30, loss:0.103621
Epoch:17, Iter:35, loss:0.101150
Epoch:17, Iter:40, loss:0.098991
After 17 epoch,test accuracy is 0.975000
Epoch:18
Epoch:18, Iter:0, loss:0.059284
Epoch:18, Iter:5, loss:0.034316
Epoch:18, Iter:10, loss:0.029591
Epoch:18, Iter:15, loss:0.028698
Epoch:18, Iter:20, loss:0.026980
Epoch:18, Iter:25, loss:0.026150
Epoch:18, Iter:30, loss:0.025403
Epoch:18, Iter:35, loss:0.024759
Epoch:18, Iter:40, loss:0.024147
After 18 epoch,test accuracy is 0.970000
Epoch:19
Epoch:19, Iter:0, loss:0.056556
Epoch:19, Iter:5, loss:0.036748
Epoch:19, Iter:10, loss:0.034492
Epoch:19, Iter:15, loss:0.033583
Epoch:19, Iter:20, loss:0.032706
Epoch:19, Iter:25, loss:0.032074
Epoch:19, Iter:30, loss:0.031541
Epoch:19, Iter:35, loss:0.031068
Epoch:19, Iter:40, loss:0.030649
After 19 epoch,test accuracy is 0.980000
Epoch:20
Epoch:20, Iter:0, loss:0.085534
Epoch:20, Iter:5, loss:0.059223
Epoch:20, Iter:10, loss:0.057679
Epoch:20, Iter:15, loss:0.057558
Epoch:20, Iter:20, loss:0.056527
Epoch:20, Iter:25, loss:0.055705
Epoch:20, Iter:30, loss:0.055179
Epoch:20, Iter:35, loss:0.054754
Epoch:20, Iter:40, loss:0.054346
After 20 epoch,test accuracy is 0.980000
Epoch:21
Epoch:21, Iter:0, loss:0.100939
Epoch:21, Iter:5, loss:0.075176
Epoch:21, Iter:10, loss:0.075331
Epoch:21, Iter:15, loss:0.073093
Epoch:21, Iter:20, loss:0.071342
Epoch:21, Iter:25, loss:0.070578
Epoch:21, Iter:30, loss:0.070008
Epoch:21, Iter:35, loss:0.069446
Epoch:21, Iter:40, loss:0.068882
After 21 epoch,test accuracy is 0.975000
Epoch:22
Epoch:22, Iter:0, loss:0.050664
Epoch:22, Iter:5, loss:0.040133
Epoch:22, Iter:10, loss:0.039782
Epoch:22, Iter:15, loss:0.038281
Epoch:22, Iter:20, loss:0.036904
Epoch:22, Iter:25, loss:0.036063
Epoch:22, Iter:30, loss:0.035401
Epoch:22, Iter:35, loss:0.034765
Epoch:22, Iter:40, loss:0.034142
After 22 epoch,test accuracy is 0.990000
Epoch:23
Epoch:23, Iter:0, loss:0.022610
Epoch:23, Iter:5, loss:0.012676
Epoch:23, Iter:10, loss:0.012049
Epoch:23, Iter:15, loss:0.011079
Epoch:23, Iter:20, loss:0.010711
Epoch:23, Iter:25, loss:0.010575
Epoch:23, Iter:30, loss:0.010426
Epoch:23, Iter:35, loss:0.010294
Epoch:23, Iter:40, loss:0.010176
After 23 epoch,test accuracy is 0.945000
Epoch:24
Epoch:24, Iter:0, loss:0.104568
Epoch:24, Iter:5, loss:0.027092
Epoch:24, Iter:10, loss:0.021458
Epoch:24, Iter:15, loss:0.020884
Epoch:24, Iter:20, loss:0.018340
Epoch:24, Iter:25, loss:0.017442
Epoch:24, Iter:30, loss:0.017181
Epoch:24, Iter:35, loss:0.016890
Epoch:24, Iter:40, loss:0.016608
After 24 epoch,test accuracy is 0.975000
Epoch:25
Epoch:25, Iter:0, loss:0.054752
Epoch:25, Iter:5, loss:0.016544
Epoch:25, Iter:10, loss:0.016195
Epoch:25, Iter:15, loss:0.014033
Epoch:25, Iter:20, loss:0.013155
Epoch:25, Iter:25, loss:0.012427
Epoch:25, Iter:30, loss:0.012143
Epoch:25, Iter:35, loss:0.012000
Epoch:25, Iter:40, loss:0.011835
After 25 epoch,test accuracy is 0.980000
Epoch:26
Epoch:26, Iter:0, loss:0.020831
Epoch:26, Iter:5, loss:0.019126
Epoch:26, Iter:10, loss:0.017618
Epoch:26, Iter:15, loss:0.016578
Epoch:26, Iter:20, loss:0.015708
Epoch:26, Iter:25, loss:0.014924
Epoch:26, Iter:30, loss:0.014231
Epoch:26, Iter:35, loss:0.013621
Epoch:26, Iter:40, loss:0.013078
After 26 epoch,test accuracy is 0.985000
Epoch:27
Epoch:27, Iter:0, loss:0.085767
Epoch:27, Iter:5, loss:0.075604
Epoch:27, Iter:10, loss:0.069478
Epoch:27, Iter:15, loss:0.067423
Epoch:27, Iter:20, loss:0.066220
Epoch:27, Iter:25, loss:0.065093
Epoch:27, Iter:30, loss:0.064088
Epoch:27, Iter:35, loss:0.063241
Epoch:27, Iter:40, loss:0.062368
After 27 epoch,test accuracy is 0.985000
Epoch:28
Epoch:28, Iter:0, loss:0.025704
Epoch:28, Iter:5, loss:0.019883
Epoch:28, Iter:10, loss:0.018221
Epoch:28, Iter:15, loss:0.017707
Epoch:28, Iter:20, loss:0.017133
Epoch:28, Iter:25, loss:0.016720
Epoch:28, Iter:30, loss:0.016294
Epoch:28, Iter:35, loss:0.015937
Epoch:28, Iter:40, loss:0.015599
After 28 epoch,test accuracy is 0.975000
Epoch:29
Epoch:29, Iter:0, loss:0.081774
Epoch:29, Iter:5, loss:0.080829
Epoch:29, Iter:10, loss:0.077817
Epoch:29, Iter:15, loss:0.075496
Epoch:29, Iter:20, loss:0.073994
Epoch:29, Iter:25, loss:0.072654
Epoch:29, Iter:30, loss:0.071365
Epoch:29, Iter:35, loss:0.070161
Epoch:29, Iter:40, loss:0.069027
After 29 epoch,test accuracy is 0.985000
Epoch:30
Epoch:30, Iter:0, loss:0.022823
Epoch:30, Iter:5, loss:0.016409
Epoch:30, Iter:10, loss:0.013485
Epoch:30, Iter:15, loss:0.012986
Epoch:30, Iter:20, loss:0.012578
Epoch:30, Iter:25, loss:0.012341
Epoch:30, Iter:30, loss:0.012108
Epoch:30, Iter:35, loss:0.011892
Epoch:30, Iter:40, loss:0.011682
After 30 epoch,test accuracy is 0.960000
Epoch:31
Epoch:31, Iter:0, loss:0.023555
Epoch:31, Iter:5, loss:0.013961
Epoch:31, Iter:10, loss:0.008281
Epoch:31, Iter:15, loss:0.007225
Epoch:31, Iter:20, loss:0.006878
Epoch:31, Iter:25, loss:0.006575
Epoch:31, Iter:30, loss:0.006493
Epoch:31, Iter:35, loss:0.006388
Epoch:31, Iter:40, loss:0.006302
After 31 epoch,test accuracy is 0.985000
Epoch:32
Epoch:32, Iter:0, loss:0.117223
Epoch:32, Iter:5, loss:0.099215
Epoch:32, Iter:10, loss:0.094528
Epoch:32, Iter:15, loss:0.088871
Epoch:32, Iter:20, loss:0.084306
Epoch:32, Iter:25, loss:0.079940
Epoch:32, Iter:30, loss:0.075245
Epoch:32, Iter:35, loss:0.070715
Epoch:32, Iter:40, loss:0.067033
After 32 epoch,test accuracy is 0.970000
Epoch:33
Epoch:33, Iter:0, loss:0.094884
Epoch:33, Iter:5, loss:0.072200
Epoch:33, Iter:10, loss:0.068369
Epoch:33, Iter:15, loss:0.064205
Epoch:33, Iter:20, loss:0.061424
Epoch:33, Iter:25, loss:0.059275
Epoch:33, Iter:30, loss:0.057364
Epoch:33, Iter:35, loss:0.055653
Epoch:33, Iter:40, loss:0.053989
After 33 epoch,test accuracy is 0.985000
Epoch:34
Epoch:34, Iter:0, loss:0.086641
Epoch:34, Iter:5, loss:0.075208
Epoch:34, Iter:10, loss:0.067836
Epoch:34, Iter:15, loss:0.064465
Epoch:34, Iter:20, loss:0.063379
Epoch:34, Iter:25, loss:0.062401
Epoch:34, Iter:30, loss:0.061456
Epoch:34, Iter:35, loss:0.060524
Epoch:34, Iter:40, loss:0.059625
After 34 epoch,test accuracy is 0.990000
Epoch:35
Epoch:35, Iter:0, loss:0.085850
Epoch:35, Iter:5, loss:0.087053
Epoch:35, Iter:10, loss:0.056269
Epoch:35, Iter:15, loss:0.051373
Epoch:35, Iter:20, loss:0.050710
Epoch:35, Iter:25, loss:0.048840
Epoch:35, Iter:30, loss:0.048009
Epoch:35, Iter:35, loss:0.047486
Epoch:35, Iter:40, loss:0.046877
After 35 epoch,test accuracy is 0.950000
Epoch:36
Epoch:36, Iter:0, loss:0.110865
Epoch:36, Iter:5, loss:0.063753
Epoch:36, Iter:10, loss:0.059193
Epoch:36, Iter:15, loss:0.052947
Epoch:36, Iter:20, loss:0.051269
Epoch:36, Iter:25, loss:0.049551
Epoch:36, Iter:30, loss:0.048330
Epoch:36, Iter:35, loss:0.047244
Epoch:36, Iter:40, loss:0.046140
After 36 epoch,test accuracy is 0.950000
Epoch:37
Epoch:37, Iter:0, loss:0.095408
Epoch:37, Iter:5, loss:0.016991
Epoch:37, Iter:10, loss:0.004566
Epoch:37, Iter:15, loss:0.005232
Epoch:37, Iter:20, loss:0.005220
Epoch:37, Iter:25, loss:0.004570
Epoch:37, Iter:30, loss:0.004308
Epoch:37, Iter:35, loss:0.004263
Epoch:37, Iter:40, loss:0.004202
After 37 epoch,test accuracy is 0.965000
Epoch:38
Epoch:38, Iter:0, loss:0.146583
Epoch:38, Iter:5, loss:0.024779
Epoch:38, Iter:10, loss:0.031857
Epoch:38, Iter:15, loss:0.022896
Epoch:38, Iter:20, loss:0.023115
Epoch:38, Iter:25, loss:0.021483
Epoch:38, Iter:30, loss:0.020961
Epoch:38, Iter:35, loss:0.020465
Epoch:38, Iter:40, loss:0.019959
After 38 epoch,test accuracy is 0.950000
Epoch:39
Epoch:39, Iter:0, loss:0.041242
Epoch:39, Iter:5, loss:0.016075
Epoch:39, Iter:10, loss:0.013749
Epoch:39, Iter:15, loss:0.013720
Epoch:39, Iter:20, loss:0.013396
Epoch:39, Iter:25, loss:0.013054
Epoch:39, Iter:30, loss:0.012737
Epoch:39, Iter:35, loss:0.012477
Epoch:39, Iter:40, loss:0.012252
After 39 epoch,test accuracy is 0.985000