MNIST Handwritten Digit Recognition and Optimization

MNIST handwritten digit recognition is the "Hello World" of AI. By dissecting this program, we can deepen our understanding of deep learning.

The code in this post comes from "PaddlePaddle from Beginner to Alchemist, Part 4: Convolutional Neural Networks"; the original post is linked below.

https://blog.csdn.net/qq_33200967/article/details/83506694

That post uses both deep feed-forward neural networks and convolutional neural networks. The code runs on Baidu AI Studio with GPU execution. This post mainly records the digit recognition accuracy at different network depths, to build intuition about the behavior of neural networks.

I. Deep Neural Networks

In this part, we vary the neural network between one, two, three, and four layers; networks with three or more layers are generally called deep neural networks.

Modify the code to select the multilayer perceptron as the classifier:

# Select the classifier
model = multilayer_perceptron(image)

1. Single-layer network

The network model is: input layer --> output layer. Modify the code as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    #hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=input, size=10, act='softmax')
    return fc

The training and test accuracy figures are below:

Pass:0, Batch:0, Cost:3.12611, Accuracy:0.14844
Pass:0, Batch:100, Cost:0.57369, Accuracy:0.84375
Pass:0, Batch:200, Cost:0.34888, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.35908, Accuracy:0.89844
Pass:0, Batch:400, Cost:0.46956, Accuracy:0.85938
Test:0, Cost:0.35950, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.31775, Accuracy:0.93750
Pass:1, Batch:100, Cost:0.27652, Accuracy:0.92188
Pass:1, Batch:200, Cost:0.27065, Accuracy:0.92969
Pass:1, Batch:300, Cost:0.28560, Accuracy:0.89844
Pass:1, Batch:400, Cost:0.41429, Accuracy:0.86719
Test:1, Cost:0.31951, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.26233, Accuracy:0.94531
Pass:2, Batch:100, Cost:0.24122, Accuracy:0.92969
Pass:2, Batch:200, Cost:0.25647, Accuracy:0.92188
Pass:2, Batch:300, Cost:0.27161, Accuracy:0.91406
Pass:2, Batch:400, Cost:0.38802, Accuracy:0.85938
Test:2, Cost:0.30584, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.23583, Accuracy:0.93750
Pass:3, Batch:100, Cost:0.22913, Accuracy:0.92969
Pass:3, Batch:200, Cost:0.24992, Accuracy:0.91406
Pass:3, Batch:300, Cost:0.26481, Accuracy:0.91406
Pass:3, Batch:400, Cost:0.36972, Accuracy:0.86719
Test:3, Cost:0.29924, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.22000, Accuracy:0.93750
Pass:4, Batch:100, Cost:0.22292, Accuracy:0.93750
Pass:4, Batch:200, Cost:0.24542, Accuracy:0.91406
Pass:4, Batch:300, Cost:0.26016, Accuracy:0.91406
Pass:4, Batch:400, Cost:0.35566, Accuracy:0.85938
Test:4, Cost:0.29542, Accuracy:0.93750

Surprisingly, even a single-layer network reaches about 93% accuracy; that is genuinely unexpected.
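One reason this works: with both hidden layers commented out, the model is just softmax regression, a linear classifier followed by softmax, and that alone separates most MNIST digits. Below is a minimal NumPy sketch of the same forward pass, using hypothetical random, untrained weights purely for illustration:

```python
import numpy as np

def softmax(z):
    # shift by the row max for numerical stability
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def single_layer_forward(x, W, b):
    # the same computation as fc(input, size=10, act='softmax')
    return softmax(x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 784))         # a batch of 4 flattened 28x28 images
W = rng.standard_normal((784, 10)) * 0.01
b = np.zeros(10)
probs = single_layer_forward(x, W, b)
print(probs.shape)                        # (4, 10): one probability row per image
```

Training then amounts to fitting W and b under a cross-entropy loss, which is what the trainer does behind the fc layer.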

2. Two-layer network

The network model is: input layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    #hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden1, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:3.01273, Accuracy:0.13281
Pass:0, Batch:100, Cost:0.47009, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.25585, Accuracy:0.92969
Pass:0, Batch:300, Cost:0.28107, Accuracy:0.92188
Pass:0, Batch:400, Cost:0.42777, Accuracy:0.86719
Test:0, Cost:0.26124, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.16402, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19026, Accuracy:0.93750
Pass:1, Batch:200, Cost:0.18443, Accuracy:0.94531
Pass:1, Batch:300, Cost:0.20063, Accuracy:0.96094
Pass:1, Batch:400, Cost:0.29557, Accuracy:0.90625
Test:1, Cost:0.19170, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.13216, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.12720, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.16064, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.14157, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.22292, Accuracy:0.92969
Test:2, Cost:0.16187, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.10297, Accuracy:0.96875
Pass:3, Batch:100, Cost:0.10456, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.15482, Accuracy:0.95312
Pass:3, Batch:300, Cost:0.10015, Accuracy:0.99219
Pass:3, Batch:400, Cost:0.18723, Accuracy:0.92969
Test:3, Cost:0.14261, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08154, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.09554, Accuracy:0.94531
Pass:4, Batch:200, Cost:0.14740, Accuracy:0.93750
Pass:4, Batch:300, Cost:0.08526, Accuracy:0.99219
Pass:4, Batch:400, Cost:0.15813, Accuracy:0.95312
Test:4, Cost:0.13024, Accuracy:1.00000

After three passes of training, the test accuracy reaches 100%, and the training accuracy also reaches about 99%.
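A note on why the hidden layer helps at all: without the ReLU activation, stacking two fully connected layers would collapse into a single linear map, so depth alone adds nothing. A quick NumPy check with hypothetical random weights (biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal((4, 784))            # a batch of 4 flattened images
W1 = rng.standard_normal((784, 100)) * 0.01
W2 = rng.standard_normal((100, 10)) * 0.01

# without an activation, two stacked linear layers equal one linear layer
two_linear = (x @ W1) @ W2
one_linear = x @ (W1 @ W2)
print(np.allclose(two_linear, one_linear))   # True

# with ReLU in between, the composition is genuinely nonlinear
with_relu = np.maximum(x @ W1, 0.0) @ W2
print(np.allclose(with_relu, one_linear))    # False
```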

3. Three-layer network

The network structure is: input layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden2, size=10, act='softmax')
    return fc

Accuracy on the training and test sets:

Pass:0, Batch:0, Cost:2.39492, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.38125, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.23671, Accuracy:0.93750
Pass:0, Batch:300, Cost:0.30749, Accuracy:0.91406
Pass:0, Batch:400, Cost:0.40188, Accuracy:0.87500
Test:0, Cost:0.21345, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.12216, Accuracy:0.98438
Pass:1, Batch:100, Cost:0.16529, Accuracy:0.94531
Pass:1, Batch:200, Cost:0.17492, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.12466, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.31326, Accuracy:0.90625
Test:1, Cost:0.15984, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.08932, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.10728, Accuracy:0.96094
Pass:2, Batch:200, Cost:0.13919, Accuracy:0.97656
Pass:2, Batch:300, Cost:0.07744, Accuracy:0.98438
Pass:2, Batch:400, Cost:0.24006, Accuracy:0.94531
Test:2, Cost:0.13407, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.06845, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09714, Accuracy:0.95312
Pass:3, Batch:200, Cost:0.11451, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.06234, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.16804, Accuracy:0.96094
Test:3, Cost:0.11795, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.05074, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.09888, Accuracy:0.96094
Pass:4, Batch:200, Cost:0.09145, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.04734, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.12088, Accuracy:0.96875
Test:4, Cost:0.10956, Accuracy:1.00000

Compared with the two-layer network, the three-layer network reaches high accuracy faster: test accuracy hits 100% after the first pass. However, the test accuracy then dips before returning to 100%, and I do not understand why.

4. Four-layer network

The four-layer network structure is: input layer -> hidden layer -> hidden layer -> hidden layer -> output layer.

The code is as follows:

# Define the multilayer perceptron
def multilayer_perceptron(input):
    # First fully connected layer, ReLU activation
    hidden1 = fluid.layers.fc(input=input, size=100, act='relu')
    # Second fully connected layer, ReLU activation
    hidden2 = fluid.layers.fc(input=hidden1, size=100, act='relu')
    # Third fully connected layer, ReLU activation
    hidden3 = fluid.layers.fc(input=hidden2, size=100, act='relu')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=hidden3, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:2.47239, Accuracy:0.11719
Pass:0, Batch:100, Cost:0.45294, Accuracy:0.85156
Pass:0, Batch:200, Cost:0.26287, Accuracy:0.92188
Pass:0, Batch:300, Cost:0.29945, Accuracy:0.89062
Pass:0, Batch:400, Cost:0.45551, Accuracy:0.85938
Test:0, Cost:0.23784, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.16192, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.19751, Accuracy:0.92969
Pass:1, Batch:200, Cost:0.16019, Accuracy:0.95312
Pass:1, Batch:300, Cost:0.16501, Accuracy:0.95312
Pass:1, Batch:400, Cost:0.29107, Accuracy:0.91406
Test:1, Cost:0.15401, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.10620, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.10332, Accuracy:0.95312
Pass:2, Batch:200, Cost:0.14469, Accuracy:0.96094
Pass:2, Batch:300, Cost:0.11048, Accuracy:0.96875
Pass:2, Batch:400, Cost:0.22040, Accuracy:0.94531
Test:2, Cost:0.12461, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.07755, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.07912, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.12591, Accuracy:0.96094
Pass:3, Batch:300, Cost:0.09253, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.17517, Accuracy:0.95312
Test:3, Cost:0.11185, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.06638, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.07160, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.11147, Accuracy:0.96094
Pass:4, Batch:300, Cost:0.08665, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.15449, Accuracy:0.95312
Test:4, Cost:0.10896, Accuracy:1.00000

The four-layer network converges quickly and stably, with accuracy a little higher than the three-layer network's. But is deeper always better? I did not test further.
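The depth comparison is easier to read next to the parameter counts. A small sketch counting weights and biases for each configuration above (784 = 28x28 inputs, 100-unit hidden layers, 10 outputs, matching the code):

```python
def fc_params(sizes):
    # total weights + biases for a stack of fully connected layers,
    # where sizes = [input_dim, hidden sizes..., output_dim]
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

for name, sizes in [
    ("1 layer ", [784, 10]),
    ("2 layers", [784, 100, 10]),
    ("3 layers", [784, 100, 100, 10]),
    ("4 layers", [784, 100, 100, 100, 10]),
]:
    print(name, fc_params(sizes))
# 1 layer  7850
# 2 layers 79510
# 3 layers 89610
# 4 layers 99710
```

Most of the capacity jump comes from the first 784->100 layer; each extra 100-unit layer adds only about 10k parameters, which lines up with how small the accuracy gains become as depth grows.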

II. Convolutional Neural Networks

A convolutional neural network consists of one or more convolutional layers, pooling layers, and fully connected layers. Here we compare networks with one to three conv+pool stages.

First, modify the code to select the CNN (Convolutional Neural Network) classifier:

# Select the classifier
#model = multilayer_perceptron(image)
model = convolutional_neural_network(image)

1. One convolution + pooling stage

The network structure is: input layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 3x3 kernels, 32 filters
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 3x3 kernels, 64 filters
    #conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling, stride 1
    #pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool1, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:3.51505, Accuracy:0.10156
Pass:0, Batch:100, Cost:0.34445, Accuracy:0.89844
Pass:0, Batch:200, Cost:0.21319, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.23849, Accuracy:0.94531
Pass:0, Batch:400, Cost:0.47284, Accuracy:0.87500
Test:0, Cost:0.19661, Accuracy:1.00000
Pass:1, Batch:0, Cost:0.18348, Accuracy:0.97656
Pass:1, Batch:100, Cost:0.08346, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.09432, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.14066, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.12486, Accuracy:0.96094
Test:1, Cost:0.12581, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09072, Accuracy:0.97656
Pass:2, Batch:100, Cost:0.09942, Accuracy:0.96875
Pass:2, Batch:200, Cost:0.06352, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.11163, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.10208, Accuracy:0.96875
Test:2, Cost:0.13513, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.09934, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.04291, Accuracy:0.97656
Pass:3, Batch:200, Cost:0.04950, Accuracy:0.98438
Pass:3, Batch:300, Cost:0.09500, Accuracy:0.96875
Pass:3, Batch:400, Cost:0.06909, Accuracy:0.98438
Test:3, Cost:0.12898, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.08588, Accuracy:0.97656
Pass:4, Batch:100, Cost:0.03593, Accuracy:0.98438
Pass:4, Batch:200, Cost:0.04931, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.08851, Accuracy:0.97656
Pass:4, Batch:400, Cost:0.09598, Accuracy:0.97656
Test:4, Cost:0.11659, Accuracy:1.00000

As the logs show, even the network with a single convolutional layer already beats the four-layer fully connected network on training accuracy; it performs very well.
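It also helps to check what the fc layer actually sees. Assuming the framework's default zero padding, the standard output-size formula gives 28 -> 26 after the 3x3, stride-1 convolution and 26 -> 25 after the 2x2, stride-1 pool, so the softmax layer is fed 32*25*25 = 20000 features. A sketch:

```python
def conv_out(size, filter_size, stride=1, padding=0):
    # standard output-size formula for convolution and pooling windows
    return (size + 2 * padding - filter_size) // stride + 1

s = 28                 # MNIST images are 28x28
s = conv_out(s, 3)     # after the 3x3 conv, stride 1 -> 26
s = conv_out(s, 2)     # after the 2x2 max-pool, stride 1 -> 25
print(s)               # 25
print(32 * s * s)      # 20000 features feed the softmax fc layer
```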

2. Network with two convolutional layers

The network structure is: input layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows:

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 3x3 kernels, 32 filters
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 3x3 kernels, 64 filters
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling, stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool2, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:4.55156, Accuracy:0.06250
Pass:0, Batch:100, Cost:0.21274, Accuracy:0.93750
Pass:0, Batch:200, Cost:0.13221, Accuracy:0.95312
Pass:0, Batch:300, Cost:0.14602, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.21743, Accuracy:0.94531
Test:0, Cost:0.10561, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.13267, Accuracy:0.96875
Pass:1, Batch:100, Cost:0.07436, Accuracy:0.96875
Pass:1, Batch:200, Cost:0.05657, Accuracy:0.98438
Pass:1, Batch:300, Cost:0.17919, Accuracy:0.96875
Pass:1, Batch:400, Cost:0.16327, Accuracy:0.97656
Test:1, Cost:0.09448, Accuracy:0.93750
Pass:2, Batch:0, Cost:0.09776, Accuracy:0.98438
Pass:2, Batch:100, Cost:0.03945, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.05310, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.14646, Accuracy:0.97656
Pass:2, Batch:400, Cost:0.06727, Accuracy:0.96875
Test:2, Cost:0.09720, Accuracy:1.00000
Pass:3, Batch:0, Cost:0.06443, Accuracy:0.98438
Pass:3, Batch:100, Cost:0.09163, Accuracy:0.96875
Pass:3, Batch:200, Cost:0.01216, Accuracy:1.00000
Pass:3, Batch:300, Cost:0.10314, Accuracy:0.98438
Pass:3, Batch:400, Cost:0.08002, Accuracy:0.97656
Test:3, Cost:0.11338, Accuracy:1.00000
Pass:4, Batch:0, Cost:0.02246, Accuracy:0.98438
Pass:4, Batch:100, Cost:0.01949, Accuracy:0.99219
Pass:4, Batch:200, Cost:0.06579, Accuracy:0.97656
Pass:4, Batch:300, Cost:0.15396, Accuracy:0.98438
Pass:4, Batch:400, Cost:0.03079, Accuracy:0.99219
Test:4, Cost:0.12499, Accuracy:1.00000

With two convolutional layers, the training accuracy stays very stable near 100%, which is impressive; however, the test accuracy only reaches about 93% in the first two passes, and I am not sure why.

3. Network with three convolutional layers

The network structure is: input layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> convolutional layer -> pooling layer -> fully connected layer.

The code is as follows. The third convolutional layer is my own addition, and I am not sure it should use 128 filters; I chose 128 because the first layer uses 32 filters and the second uses 64, so I guessed the third should double again, but I have no mathematical basis for that choice.

# Convolutional neural network
def convolutional_neural_network(input):
    # First convolutional layer: 3x3 kernels, 32 filters
    conv1 = fluid.layers.conv2d(input=input, num_filters=32, filter_size=3, stride=1)
    # First pooling layer: 2x2 max pooling, stride 1
    pool1 = fluid.layers.pool2d(input=conv1, pool_size=2, pool_stride=1, pool_type='max')
    # Second convolutional layer: 3x3 kernels, 64 filters
    conv2 = fluid.layers.conv2d(input=pool1, num_filters=64, filter_size=3, stride=1)
    # Second pooling layer: 2x2 max pooling, stride 1
    pool2 = fluid.layers.pool2d(input=conv2, pool_size=2, pool_stride=1, pool_type='max')
    # Third convolutional layer: 3x3 kernels, 128 filters
    conv3 = fluid.layers.conv2d(input=pool2, num_filters=128, filter_size=3, stride=1)
    # Third pooling layer: 2x2 max pooling, stride 1
    pool3 = fluid.layers.pool2d(input=conv3, pool_size=2, pool_stride=1, pool_type='max')
    # Fully connected output layer with softmax activation, sized to the number of labels
    fc = fluid.layers.fc(input=pool3, size=10, act='softmax')
    return fc

Training and test accuracy:

Pass:0, Batch:0, Cost:6.76483, Accuracy:0.16406
Pass:0, Batch:100, Cost:0.13075, Accuracy:0.95312
Pass:0, Batch:200, Cost:0.18448, Accuracy:0.96875
Pass:0, Batch:300, Cost:0.21740, Accuracy:0.97656
Pass:0, Batch:400, Cost:0.40639, Accuracy:0.92969
Test:0, Cost:0.21052, Accuracy:0.93750
Pass:1, Batch:0, Cost:0.22828, Accuracy:0.96094
Pass:1, Batch:100, Cost:0.06976, Accuracy:0.97656
Pass:1, Batch:200, Cost:0.15817, Accuracy:0.96875
Pass:1, Batch:300, Cost:0.16659, Accuracy:0.98438
Pass:1, Batch:400, Cost:0.16523, Accuracy:0.96875
Test:1, Cost:0.14129, Accuracy:1.00000
Pass:2, Batch:0, Cost:0.15643, Accuracy:0.96875
Pass:2, Batch:100, Cost:0.04042, Accuracy:0.98438
Pass:2, Batch:200, Cost:0.09001, Accuracy:0.98438
Pass:2, Batch:300, Cost:0.19979, Accuracy:0.96094
Pass:2, Batch:400, Cost:0.26533, Accuracy:0.96094
Test:2, Cost:0.34692, Accuracy:0.93750
Pass:3, Batch:0, Cost:0.32040, Accuracy:0.97656
Pass:3, Batch:100, Cost:0.23548, Accuracy:0.96094
Pass:3, Batch:200, Cost:0.14403, Accuracy:0.96875
Pass:3, Batch:300, Cost:0.10629, Accuracy:0.97656
Pass:3, Batch:400, Cost:0.36311, Accuracy:0.94531
Test:3, Cost:0.34852, Accuracy:0.93750
Pass:4, Batch:0, Cost:0.12174, Accuracy:0.99219
Pass:4, Batch:100, Cost:0.15106, Accuracy:0.96875
Pass:4, Batch:200, Cost:0.17723, Accuracy:0.96875
Pass:4, Batch:300, Cost:0.18383, Accuracy:0.96875
Pass:4, Batch:400, Cost:0.17384, Accuracy:0.96094
Test:4, Cost:0.16233, Accuracy:1.00000

Strangely, the three-convolution network is no better than the two-convolution one. Could something be wrong with the code settings?
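One thing worth checking is the shape arithmetic rather than the accuracy alone. Assuming default zero padding, each conv+pool stage at stride 1 only shrinks the feature map slightly, so the shapes are valid all the way through, but the third stage feeds the final fc layer more features than the shallower networks did. A sketch tracing the sizes:

```python
def conv_out(size, filter_size, stride=1, padding=0):
    # standard output-size formula for convolution and pooling windows
    return (size + 2 * padding - filter_size) // stride + 1

s, channels = 28, 1
for n_filters in (32, 64, 128):
    s = conv_out(s, 3)   # 3x3 conv, stride 1
    s = conv_out(s, 2)   # 2x2 max-pool, stride 1
    channels = n_filters
print(s, channels)       # 19 128
print(channels * s * s)  # 46208 features feed the final fc layer
```

For comparison, the two-convolution version feeds 64*22*22 = 30976 features into its output layer; whether the extra width of this final layer contributes to the less stable test accuracy is only a guess.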
