TensorFlow 2.0: Optimizing the Initial Weights of a Convolutional Neural Network with Particle Swarm Optimization

I. Building the Network

Here, the MNIST dataset is used for the demonstration.

1. Import the required libraries and dataset

import tensorflow as tf
import numpy as np
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

2. Preprocess the dataset

A convolutional neural network expects input of shape (batch_size, height, width, channels), but the loaded dataset has shape (batch_size, height, width), so we add a channel dimension and convert the data type from integer to floating point.

x_train = tf.expand_dims(x_train, axis=3)
x_test = tf.expand_dims(x_test, axis=3)
x_train = tf.cast(x_train, tf.float32)
x_test = tf.cast(x_test, tf.float32)

3. Slice and batch the dataset

dataset_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(y_train.shape[0]).batch(64)
dataset_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)).shuffle(y_test.shape[0]).batch(200)

4. Build the classifier

4.1 Conv2D Layer

class Conv2D(tf.keras.layers.Layer):
    def __init__(self, output_dim, kernel_initial, kernel_size=(3, 3), strides=(1, 1, 1, 1)):
        super().__init__()
        self.output_dim = output_dim
        self.kernel_size = kernel_size
        self.strides = strides
        self.kernel_initial = kernel_initial
        
    def build(self, input_shape):
        bias_shape = tf.TensorShape((self.output_dim, ))
        self.kernel = tf.Variable(self.kernel_initial)
        self.bias = tf.Variable(tf.zeros(shape=bias_shape))

    def call(self, inputs):
        output = tf.nn.bias_add(tf.nn.conv2d(inputs, filters=self.kernel, strides=self.strides, padding='SAME'), self.bias)
        return tf.nn.relu(output)
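
As a quick sanity check (not part of the original article; the kernel shape and batch size below are arbitrary illustrative values), the layer can be exercised on a dummy batch:

# Minimal usage sketch: build the custom Conv2D layer from an explicit initial
# kernel of shape (kernel_height, kernel_width, channels_in, channels_out).
init_kernel = tf.random.uniform((3, 3, 1, 8), minval=-0.1, maxval=0.1)
layer = Conv2D(output_dim=8, kernel_initial=init_kernel)
dummy_batch = tf.random.normal((4, 28, 28, 1))  # (batch, height, width, channels)
print(layer(dummy_batch).shape)                 # expected: (4, 28, 28, 8)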

4.2 CNN Block

Each CNN block contains one convolutional layer, one pooling layer, and one batch-normalization layer.

class CnnSection(tf.keras.Model):
    def __init__(self, num_channels, kernel_initial):
        super().__init__()
        self.conv = Conv2D(num_channels, kernel_initial)
        self.pool = tf.keras.layers.MaxPool2D(pool_size=2,
                                              strides=2,
                                              padding='same')
        self.bn = tf.keras.layers.BatchNormalization()
        
    def call(self, inputs):
        x = self.conv(inputs)
        x = self.pool(x)
        x = self.bn(x)
        
        return x

4.3 Dense Block

Built in the same way as the CNN block: each Dense block contains one fully connected layer and one batch-normalization layer.

class DenseSection(tf.keras.Model):
    def __init__(self, units):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation='relu')
        self.bn = tf.keras.layers.BatchNormalization()
        
    def call(self, inputs):
        x = self.dense(inputs)
        x = self.bn(x)
        
        return x

4.4 Classifier

Here we use two convolutional layers with 32 kernels each, and one fully connected layer with 32 neurons.

class Classifier(tf.keras.Model):
    def __init__(self, kernel_initial):
        super().__init__()
        self.num_cnn = 2
        self.num_channels = [32, 32]
        self.num_dense = 1
        self.dense_units = [32]
        
        self.CNN=[]
        for i in range(self.num_cnn):
            self.CNN.append(CnnSection(self.num_channels[i], kernel_initial[i]))
        
        self.flatten = tf.keras.layers.Flatten()
        
        self.DENSE=[]
        for i in range(self.num_dense):
            self.DENSE.append(DenseSection(self.dense_units[i]))
        self.DENSE.append(tf.keras.layers.Dense(10, activation='softmax'))
    
    def call(self, inputs):
        x = inputs
        for layer in self.CNN:
            x = layer(x)
        x = self.flatten(x)
        
        for layer in self.DENSE:
            x = layer(x)
        
        return x

4.5 Setting the Parameters

These parameters are exactly what serves as a particle in the particle swarm algorithm:

classifier = Classifier(kernel)

Here, kernel is a list holding the initial weights of the kernels in the two convolutional layers, denoted kernel1 and kernel2 respectively. Since the weights of a convolutional layer have shape
$size = (kernel_{width},\ kernel_{height},\ channels_{in},\ channels_{out})$
in this example:
$size_{kernel1} = (3, 3, 1, 32)$
$size_{kernel2} = (3, 3, 32, 32)$
For example, we can set:

limit1 = np.sqrt(6/(3*3*1 + 3*3*32))    # Glorot limit for kernel1: sqrt(6 / (fan_in + fan_out))
limit2 = np.sqrt(6/(3*3*32 + 3*3*32))   # Glorot limit for kernel2
kernel = []
kernel1 = np.random.uniform(-limit1, limit1, size=(3, 3, 1, 32))
kernel1 = tf.cast(kernel1, tf.float32)
kernel2 = np.random.uniform(-limit2, limit2, size=(3, 3, 32, 32))
kernel2 = tf.cast(kernel2, tf.float32)
kernel.append(kernel1)
kernel.append(kernel2)

The initialization used here is 'glorot_uniform', the default kernel initializer of tf.keras.layers.Conv2D; its underlying principle is explained in: What exactly is the 'glorot_uniform' initializer in tf.keras.layers.Conv2D in TensorFlow 2.0?
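
For reference, a sketch of how the same initial kernels could be drawn with TensorFlow's built-in initializer instead of computing the limits by hand (the shapes are the ones given above; this is an equivalent alternative, not the code used in the article):

# Equivalent sketch using the built-in Glorot-uniform initializer.
glorot = tf.keras.initializers.GlorotUniform()
kernel = [
    glorot(shape=(3, 3, 1, 32)),   # kernel1, limit = sqrt(6 / (9 + 288))
    glorot(shape=(3, 3, 32, 32)),  # kernel2, limit = sqrt(6 / (288 + 288))
]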

5. Build the loss function

loss_obj_classifier = tf.keras.losses.CategoricalCrossentropy()
def loss_classifier(real, pred):
    l = loss_obj_classifier(real, pred)
    return l

6. Build the gradient-descent (training-step) function

opt_classifier = tf.keras.optimizers.Adam()
def train_step_classifier(x, y):
    with tf.GradientTape() as tape:
        pred = classifier(x)
        real = tf.one_hot(y, depth=10)
        l = loss_classifier(real, pred)
    grad = tape.gradient(l, classifier.trainable_variables)
    opt_classifier.apply_gradients(zip(grad, classifier.trainable_variables))
    return l, tf.cast(tf.argmax(pred, axis=1), dtype=tf.int32), y

7. Training

epochs_classifier = 100
for epoch in range(epochs_classifier):
    for i, (feature, label) in enumerate(dataset_train):
        loss, pred_label, real_label = train_step_classifier(feature, label)
        if (i+1) % 100 == 0:
            print('Epoch {}, batch {}: loss {}'.format(epoch+1, i+1, loss))
    print('Loss after epoch {}: {}'.format(epoch+1, loss))

total_correct = 0
total_num = 0
for feature, label in dataset_test:
    prob = classifier(feature)
    pred = tf.argmax(prob, axis=1)
    pred = tf.cast(pred, tf.int32)
    correct = tf.equal(pred, tf.cast(label, tf.int32))
    correct = tf.reduce_sum(tf.cast(correct, dtype=tf.int32))
    total_correct += int(correct)
    total_num += feature.shape[0]
acc = total_correct / total_num
print('Test-set accuracy: {}'.format(acc))

Here, the test-set accuracy serves as the fitness of each particle in the particle swarm algorithm.
The model is now complete. We put the code above into project.py and wrap the data-loading and training procedures into the functions dataset_train, dataset_test = load() and acc = classify(dataset_train, dataset_test, kernel), respectively.
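
The article does not show the bodies of these two wrappers. A minimal sketch of what they might look like, assuming the classes above live in the same project.py and treating the number of training epochs per fitness evaluation as a free parameter (the article itself trains for 100 epochs), is:

def load():
    # Load and preprocess MNIST and return the train/test pipelines.
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train = tf.cast(tf.expand_dims(x_train, axis=3), tf.float32)
    x_test = tf.cast(tf.expand_dims(x_test, axis=3), tf.float32)
    dataset_train = tf.data.Dataset.from_tensor_slices((x_train, y_train)).shuffle(y_train.shape[0]).batch(64)
    dataset_test = tf.data.Dataset.from_tensor_slices((x_test, y_test)).shuffle(y_test.shape[0]).batch(200)
    return dataset_train, dataset_test

def classify(dataset_train, dataset_test, kernel, epochs=1):
    # Train a classifier whose convolution kernels start from `kernel`
    # and return its test-set accuracy (the particle's fitness).
    classifier = Classifier(kernel)
    opt = tf.keras.optimizers.Adam()
    loss_obj = tf.keras.losses.CategoricalCrossentropy()
    for _ in range(epochs):
        for feature, label in dataset_train:
            with tf.GradientTape() as tape:
                pred = classifier(feature)
                loss = loss_obj(tf.one_hot(label, depth=10), pred)
            grads = tape.gradient(loss, classifier.trainable_variables)
            opt.apply_gradients(zip(grads, classifier.trainable_variables))
    total_correct, total_num = 0, 0
    for feature, label in dataset_test:
        pred = tf.cast(tf.argmax(classifier(feature), axis=1), tf.int32)
        correct = tf.equal(pred, tf.cast(label, tf.int32))
        total_correct += int(tf.reduce_sum(tf.cast(correct, tf.int32)))
        total_num += feature.shape[0]
    return total_correct / total_num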

II. Particle Swarm Optimization

For an introduction to the standard particle swarm algorithm, see my other article: Particle Swarm Optimization for a maximization problem explained in detail (with Python code).

1. Import the required libraries

import numpy as np
import tensorflow as tf
import project

2. Set the parameters

w = 0.8             # inertia weight
c1 = 2              # cognitive (self) factor
c2 = 2              # social factor
r1 = 0.6            # cognitive learning rate
r2 = 0.3            # social learning rate
pN = 3              # number of particles
dim = 3             # search dimension
max_iter = 300      # maximum number of iterations

3. Load the data

dataset_train, dataset_test = project.load()

4. Fitness function

The fitness is the classification accuracy on the test set.

def get_fitness(params): 
    return project.classify(dataset_train, dataset_test, params)

5. Generate the initial swarm

X = []
V = []
p_best = []
p_bestfit = []
g_bestfit = -1e15
for i in range(pN):  
    # initialize the position and velocity of each particle
    kernel = []
    kernel1 = np.random.normal(0, 1, size=(3, 3, 1, 32))
    kernel1 = tf.cast(kernel1, tf.float32)
    kernel2 = np.random.normal(0, 1, size=(3, 3, 32, 32))
    kernel2 = tf.cast(kernel2, tf.float32)
    kernel.append(kernel1)
    kernel.append(kernel2)
    X.append(kernel)
    
    velocity = []
    velocity1 = np.random.normal(0, 1, size=(3, 3, 1, 32))
    velocity1 = tf.cast(velocity1, tf.float32)
    velocity2 = np.random.normal(0, 1, size=(3, 3, 32, 32))
    velocity2 = tf.cast(velocity2, tf.float32)
    velocity.append(velocity1)
    velocity.append(velocity2)
    V.append(velocity)
    
    p_best.append(kernel)
    p_bestfit.append(get_fitness(kernel))
    if p_bestfit[i] > g_bestfit:  
        g_bestfit = p_bestfit[i] 
        g_best = X[i]

6. Helper functions for list addition, subtraction, and scalar multiplication

6.1 Addition

Element-wise addition of two lists.

def sum_(v1, v2):
    return list(map(lambda x: x[0]+x[1], zip(v1, v2)))

6.2 Subtraction

Element-wise subtraction of two lists.

def subtract(v1, v2):
    return list(map(lambda x: x[0]-x[1], zip(v1, v2)))

6.3 Multiplication

Multiply every element of a list by a scalar.

def multiply(x, w):
    res = []
    for i in x:
        res.append(i*w)
    return res
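
A tiny illustration (with made-up numbers) of how these helpers act element by element; in the optimization loop below, the list elements are the two kernel tensors of each particle:

a = [1.0, 2.0]
b = [0.5, 1.5]
print(sum_(a, b))        # [1.5, 3.5]
print(subtract(a, b))    # [0.5, 0.5]
print(multiply(a, 2.0))  # [2.0, 4.0]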

7. Optimization

fitness = []  

for _ in range(max_iter):  
    for i in range(pN):           # update p_best and g_best
        temp = get_fitness(X[i])  # fitness at the current position
        print('Fitness of particle {}: {}'.format(i+1, temp))
        if temp > p_bestfit[i]:       # update the personal best
            p_bestfit[i] = temp  
            p_best[i] = X[i]  
            if p_bestfit[i] > g_bestfit:  # update the global best
                g_best = X[i]  
                g_bestfit = p_bestfit[i]  

    for i in range(pN):  # update velocities and positions
        dist1 = subtract(p_best[i], X[i])
        dist2 = subtract(g_best, X[i])
        dist = sum_(multiply(dist1, c1*r1), multiply(dist2, c2*r2))
        V[i] = sum_(multiply(V[i], w), dist)
        X[i] = sum_(X[i], V[i])

    fitness.append(g_bestfit)
    print('fitness: ', fitness)
    
print(g_best,g_bestfit)
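
Once the loop finishes, g_best holds the best set of initial kernels found by the swarm. One way to use it (a sketch, simply reusing the classify wrapper defined earlier) is:

# Retrain and evaluate the classifier starting from the PSO-optimized initial weights.
best_acc = project.classify(dataset_train, dataset_test, g_best)
print('Accuracy with PSO-optimized initial weights:', best_acc)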