TF2.0 Model Training

Overview

This is the second article, "TF2.0 Model Training", in my TF2.0 introductory notes ("TF2.0 Model Creation", "TF2.0 Model Training", "TF2.0 Model Saving"). This article covers training a model.

  • Below I demonstrate model training with three approaches (using image classification as the running example):
    • 1. Training with the fit method
    • 2. Training with the fit_generator method
    • 3. A custom training loop

About the Dataset

The dataset is tf_flowers, a five-class flower dataset: daisy, tulips, sunflowers, roses, and dandelion.

import pathlib
from tensorflow.keras.utils import get_file

data_root = get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
                     fname='flower_photos', 
                     untar=True, 
                     cache_dir='./', 
                     cache_subdir='datasets')
                     
data_path = pathlib.Path(data_root)

print("data_path:",data_path)
for item in data_path.iterdir():
    print(item)

Output:

Downloading data from https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz
228818944/228813984 [==============================] - 1s 0us/step
data_path: datasets/flower_photos
datasets/flower_photos/daisy
datasets/flower_photos/tulips
datasets/flower_photos/sunflowers
datasets/flower_photos/roses
datasets/flower_photos/LICENSE.txt
datasets/flower_photos/dandelion

1. Training the Model with the fit Method

Approach 1: train the model with the fit method.
Steps:
1. Prepare the data
2. Create the model
3. Compile the model
4. Train the model

Preparing the data

Collect the paths of all flower images.

import random

all_image_paths = list(data_path.glob('*/*'))              # every file under the subdirectories
all_image_paths = [str(path) for path in all_image_paths]  # convert <class 'pathlib.WindowsPath'> objects to str
random.shuffle(all_image_paths)                            # shuffle the order
print(all_image_paths[0])
# Output:
# datasets\flower_photos\roses\3422228549_f147d6e642.jpg

Collect all the flower labels.

label_names = []
for item in data_path.glob('*/'):  # every entry in the data directory
    if item.is_dir():              # keep only the class folders
        label_names.append(item.name)
label_names.sort()                 # sort once for a stable ordering

label_name_index = dict((name, index) for index, name in enumerate(label_names))
print(label_names)
print(label_name_index)
# Output:
#['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
#{'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}

# take each file's parent directory name and map it to a label via the dict
all_image_labels = [label_name_index[pathlib.Path(path).parent.name] for path in all_image_paths]
print(all_image_labels[0])
# Output:
# 2

Define a few variables.

input_shape=(192,192,3)
classes    =len(label_names)
batch_size =64
epochs     =10
steps_per_epoch=len(all_image_paths)//batch_size
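As a quick sanity check on steps_per_epoch: tf_flowers holds 3,670 images, so integer division by the batch size of 64 gives the 57 steps per epoch that appear in the training log.

```python
num_images = 3670                 # total images in tf_flowers
batch_size = 64
print(num_images // batch_size)   # 57 steps per epoch (the 22 leftover images don't fill a batch)
```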

We now have the paths of every image in all_image_paths and their labels in all_image_labels.
Next we write a function load_preprocess_image that loads and preprocesses an image, and a function make_image_label_datasets that pairs each image with its label.

import tensorflow as tf

def load_preprocess_image(image_paths):
    image = tf.io.read_file(image_paths)            #img_string
    image = tf.image.decode_jpeg(image, channels=3) #img_tensor
    image = tf.image.resize(image, [192,192])       #img_resize
    image = image/255.0                             #img_normal    
    return image

def make_image_label_datasets(image_paths, image_labels):
    return load_preprocess_image(image_paths), image_labels

Building the dataset

datasets = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
image_label_datasets = datasets.map(make_image_label_datasets)

Take two images with their labels and visualize them.

import matplotlib.pyplot as plt
import numpy as np

plt.figure(figsize=(6,6))
n = 0
for img, label in image_label_datasets.take(2):
    n = n + 1
    image = np.array(img.numpy()*255.0).astype("uint8")
    plt.subplot(1, 2, n)
    plt.title('label:' + str(label.numpy()))
    plt.imshow(image)
plt.show()

Creating the model

Since this is an introductory tutorial we skip transfer learning and build a small VGG-style model, using method 2 from the previous article.

from tensorflow.keras import Model
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Input
def my_model(input_shape, classes):
    inputs=Input(input_shape)
    # Block 1
    x = Conv2D(64,  (3, 3), activation='relu', padding='same')(inputs)
    x = MaxPooling2D((2, 2), strides=(2, 2))(x)
    # Block 2
    x = Conv2D(128, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
    # Block 3
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = Conv2D(256, (3, 3), activation='relu', padding='same')(x)
    x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
    
    x = Flatten()(x)
    x = Dense(512, activation='relu')(x)
    x = Dense(256, activation='relu')(x)
    x = Dense(classes, activation='softmax')(x)
    model = Model(inputs, x)
    return model
model = my_model(input_shape, classes)

Compiling the model

The optimizer is Adam. Because the labels are integer indices 0, 1, 2, … rather than one-hot vectors [1, 0, 0, …], [0, 1, 0, …], [0, 0, 1, …], the loss function must be sparse_categorical_crossentropy rather than categorical_crossentropy.

from tensorflow.keras.optimizers import Adam
opt=Adam()
model.compile(optimizer=opt,
              loss='sparse_categorical_crossentropy',
              metrics=["accuracy"])
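To make the distinction concrete, here is a small NumPy sketch (the probabilities are made-up toy values, not model outputs) computing both losses by hand; once the label format matches, they produce the same per-sample values:

```python
import numpy as np

def sparse_ce(labels, probs):
    # integer class indices: pick out the predicted probability of the true class
    return -np.log(probs[np.arange(len(labels)), labels])

def categorical_ce(onehot, probs):
    # one-hot labels: the same quantity, written as a sum over classes
    return -np.sum(onehot * np.log(probs), axis=1)

probs  = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.8, 0.1]])   # toy softmax outputs for a batch of 2
labels = np.array([0, 1])              # integer labels, as in this tutorial
onehot = np.eye(3)[labels]             # the same labels, one-hot encoded

print(np.allclose(sparse_ce(labels, probs), categorical_ce(onehot, probs)))  # True
```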

Training the model

image_label_datasets = image_label_datasets.shuffle(buffer_size=len(all_image_paths))
image_label_datasets = image_label_datasets.repeat()
image_label_datasets = image_label_datasets.batch(batch_size)
# While the model trains, `prefetch` fetches the next batches in the background, pipelining data loading with training.
image_label_datasets = image_label_datasets.prefetch(buffer_size=tf.data.experimental.AUTOTUNE)

model.fit(image_label_datasets, epochs=epochs, steps_per_epoch=steps_per_epoch)

Output:

Train for 57 steps
Epoch 1/10
57/57 [==============================] - 32s 569ms/step - loss: 1.4404 - accuracy: 0.4046
Epoch 2/10
57/57 [==============================] - 21s 377ms/step - loss: 1.0551 - accuracy: 0.5762
Epoch 3/10
57/57 [==============================] - 21s 368ms/step - loss: 0.9082 - accuracy: 0.6417
Epoch 4/10
57/57 [==============================] - 21s 363ms/step - loss: 0.7993 - accuracy: 0.6853
Epoch 5/10
57/57 [==============================] - 21s 360ms/step - loss: 0.6667 - accuracy: 0.7410
Epoch 6/10
57/57 [==============================] - 19s 337ms/step - loss: 0.4645 - accuracy: 0.8331
Epoch 7/10
57/57 [==============================] - 17s 299ms/step - loss: 0.3154 - accuracy: 0.8890
Epoch 8/10
57/57 [==============================] - 15s 257ms/step - loss: 0.2015 - accuracy: 0.9328
Epoch 9/10
57/57 [==============================] - 14s 254ms/step - loss: 0.1692 - accuracy: 0.9487
Epoch 10/10
57/57 [==============================] - 14s 253ms/step - loss: 0.1205 - accuracy: 0.9638
<tensorflow.python.keras.callbacks.History at 0x7fb7d4467f28>

2. Training the Model with the fit_generator Method

Approach 2: train the model with the fit_generator method.
Having worked through approach 1, if you are new to this you will have noticed that preparing the dataset by hand is quite tedious.
Next we use the ImageDataGenerator class and its flow_from_directory method to read the data conveniently.
Steps:
1. Build the generator
2. Create the model
3. Compile the model
4. Train the model

Building the generator

ImageDataGenerator and its flow_from_directory method accept many parameters; see the official documentation for details (if you are a beginner, you will be reading the docs a lot).

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os

def make_Gen(data_path):
    train_dataNums = 0
    train_gen  = ImageDataGenerator(rescale=1/255.0)  # normalization only
  
    for root, dirs, files in os.walk(data_path):
        for file in files:
            train_dataNums += 1

    return train_gen, train_dataNums

data_path='datasets/flower_photos'
train_gen, train_dataNums = make_Gen(data_path)
train_generator = train_gen.flow_from_directory(
    directory   = data_path,
    target_size = (192,192),
    batch_size  = batch_size,
    class_mode  = 'categorical')  # with class_mode='categorical' the generator prepares images and labels for us automatically
    
print(train_generator.class_indices)
# Output:
#Found 3670 images belonging to 5 classes.
#{'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
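With class_mode='categorical' the generator yields one-hot label vectors instead of the integer indices used in approach 1; the mapping from class_indices to a one-hot vector is equivalent to this small sketch:

```python
import numpy as np

class_indices = {'daisy': 0, 'dandelion': 1, 'roses': 2, 'sunflowers': 3, 'tulips': 4}
index  = class_indices['roses']              # integer label: 2
onehot = np.eye(len(class_indices))[index]   # the one-hot vector the generator yields
print(onehot)                                # [0. 0. 1. 0. 0.]
```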

Creating the model

Reuse the model-building function from approach 1.

model = my_model((192,192,3), 5)

Compiling the model

The loss function is categorical_crossentropy, because the generator yields one-hot labels.

from tensorflow.keras.optimizers import Adam
opt=Adam()
model.compile(loss='categorical_crossentropy', 
              optimizer=opt, 
              metrics=['accuracy'])

Training the model

model.fit_generator(train_generator,
                    steps_per_epoch =train_dataNums//batch_size,
                    epochs=epochs)

Output:
As the log shows, this approach is slower, and over the same number of epochs it also converges less quickly; think about why that might be.

Epoch 1/10
57/57 [==============================] - 27s 470ms/step - loss: 1.7245 - accuracy: 0.3236
Epoch 2/10
57/57 [==============================] - 24s 428ms/step - loss: 1.3447 - accuracy: 0.4010
Epoch 3/10
57/57 [==============================] - 24s 421ms/step - loss: 1.2717 - accuracy: 0.4323
Epoch 4/10
57/57 [==============================] - 24s 425ms/step - loss: 1.2436 - accuracy: 0.4507
Epoch 5/10
57/57 [==============================] - 24s 418ms/step - loss: 1.1845 - accuracy: 0.4907
Epoch 6/10
57/57 [==============================] - 24s 422ms/step - loss: 1.0594 - accuracy: 0.5657
Epoch 7/10
57/57 [==============================] - 24s 427ms/step - loss: 0.8960 - accuracy: 0.6521
Epoch 8/10
57/57 [==============================] - 24s 417ms/step - loss: 0.6565 - accuracy: 0.7570
Epoch 9/10
57/57 [==============================] - 24s 419ms/step - loss: 0.4401 - accuracy: 0.8464
Epoch 10/10
57/57 [==============================] - 24s 418ms/step - loss: 0.2753 - accuracy: 0.9121
<tensorflow.python.keras.callbacks.History at 0x7fb77d69ae10>

3. Custom Training

Approach 3: a custom training loop.
Sometimes, for more involved tasks, or when you want finer control and more customization, you can write the training loop yourself.

Preparing the data

See approach 1: preparing the data.

Creating the model

See approach 1: creating the model.

Defining the loss function and optimizer

from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy

#the model's output already went through a softmax activation, so from_logits=False here
my_loss=SparseCategoricalCrossentropy(from_logits=False)
my_opt =Adam()
def loss(real, pred):
    loss=my_loss(real, pred)
    return loss

Decorating train_per_step with @tf.function speeds it up by compiling it into a graph; this function computes the loss for one step, takes gradients, and updates the variables.
It uses the tf.GradientTape class, which records operations on the watched trainable variables; see the documentation for details, or my article on the TF2.0 GradientTape class.

@tf.function
def train_per_step(inputs, targets):
    with tf.GradientTape() as tape:
        predicts = model(inputs)
        # compute the loss
        loss_value = loss(real=targets, pred=predicts)
    # compute the gradients of the loss w.r.t. the trainable variables
    gradients = tape.gradient(loss_value, model.trainable_variables)
    # pair each gradient with its variable
    grads_and_vars = zip(gradients, model.trainable_variables)
    # apply the gradient update
    my_opt.apply_gradients(grads_and_vars)

    return loss_value
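The tape.gradient plus apply_gradients pair is, at its core, plain gradient descent. As an illustrative sketch (no TensorFlow needed), here is the same update rule applied by hand to a scalar function:

```python
# Minimize f(w) = (w - 3)^2 by repeatedly applying w -= lr * df/dw,
# which is the update apply_gradients performs for each trainable variable.
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3)   # gradient of f at w (the role of tape.gradient)
    w -= lr * grad       # the optimizer's update step
print(round(w, 3))       # converges to the minimum at 3.0
```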

Training the model

Shuffle the dataset and set the batch size.

epochs     = 10
batch_size = 64
# shuffle and set the batch size
image_label_datasets = image_label_datasets.shuffle(buffer_size=len(all_image_paths))
image_label_datasets = image_label_datasets.batch(batch_size, drop_remainder=True)

Start training.
This uses the tf.keras.metrics.Mean class and the tf.keras.metrics.SparseCategoricalAccuracy class; each has three methods (reset_states, result, update_state). See the documentation for details.
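To make that three-method interface concrete, here is a pure-Python stand-in (a toy class, not the real TensorFlow implementation) mimicking how tf.keras.metrics.Mean accumulates values across update_state calls:

```python
class RunningMean:
    """Toy stand-in for tf.keras.metrics.Mean's update_state/result/reset_states."""
    def __init__(self):
        self.reset_states()

    def update_state(self, value):  # accumulate one value
        self.total += value
        self.count += 1

    def result(self):               # mean of everything since the last reset
        return self.total / self.count if self.count else 0.0

    def reset_states(self):         # start over, e.g. at a new epoch
        self.total = 0.0
        self.count = 0

m = RunningMean()
for batch_loss in [1.0, 0.5, 0.3]:
    m.update_state(batch_loss)      # like calling epoch_loss_avg(batch_loss)
print(round(m.result(), 3))         # 0.6 -- the running epoch average
m.reset_states()
print(m.result())                   # 0.0
```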

import time

train_loss_results = []      # record the loss values
train_accuracy_results = []  # record the accuracy values

for epoch in range(epochs):
    start = time.time()

    # Note that these two lines sit inside the epoch loop, so the metrics
    # are re-created (reset) each epoch; no reset_states() call is needed.
    epoch_loss_avg = tf.keras.metrics.Mean()
    epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()

    for image, label in image_label_datasets:

        batch_loss = train_per_step(image, label)
        # running average; until reset_states() is called, earlier values accumulate
        epoch_loss_avg(batch_loss)
        epoch_accuracy(label, model(image))
    # save the loss and accuracy values, usable for visualization later
    train_loss_results.append(epoch_loss_avg.result())
    train_accuracy_results.append(epoch_accuracy.result())
    # after each epoch, print the loss, accuracy, and elapsed time
    print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
                                                                epoch_loss_avg.result(),
                                                                epoch_accuracy.result()))
    print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))

Output:

Epoch 000: Loss: 1.128, Accuracy: 54.441%
Time taken for 1 epoch 18.131535291671753 sec

Epoch 001: Loss: 1.002, Accuracy: 62.582%
Time taken for 1 epoch 18.448741674423218 sec

Epoch 002: Loss: 0.895, Accuracy: 66.859%
Time taken for 1 epoch 18.31860089302063 sec

Epoch 003: Loss: 0.761, Accuracy: 74.397%
Time taken for 1 epoch 17.966360569000244 sec

Epoch 004: Loss: 0.585, Accuracy: 81.168%
Time taken for 1 epoch 17.9322772026062 sec

Epoch 005: Loss: 0.410, Accuracy: 89.200%
Time taken for 1 epoch 18.117868900299072 sec

Epoch 006: Loss: 0.269, Accuracy: 94.545%
Time taken for 1 epoch 17.976419687271118 sec

Epoch 007: Loss: 0.139, Accuracy: 97.478%
Time taken for 1 epoch 17.916046380996704 sec

Epoch 008: Loss: 0.094, Accuracy: 98.629%
Time taken for 1 epoch 18.384119987487793 sec

Epoch 009: Loss: 0.154, Accuracy: 98.438%
Time taken for 1 epoch 17.962616682052612 sec

Next Article

TF2.0 Model Saving
