Deep Learning Notes — TensorFlow 2.2.0 Code Practice (Lesson 7)

Before the main text:

This post follows directly from the previous one, Deep Learning Notes — TensorFlow 2.2.0 Code Practice (Lesson 5).
It is mainly TensorFlow 2.0 code practice, following along with [KGP Talkie's TensorFlow 2.0 hands-on advanced tutorial], with fixes for code that no longer works as shown.
This post covers Lesson 7 of the popular YouTube series TensorFlow 2.0 hands-on advanced tutorial (bilingual subtitles + code practice).

Main text

!pip install tensorflow-gpu
Collecting tensorflow-gpu
  Downloading https://files.pythonhosted.org/packages/31/bf/c28971266ca854a64f4b26f07c4112ddd61f30b4d1f18108b954a746f8ea/tensorflow_gpu-2.2.0-cp36-cp36m-manylinux2010_x86_64.whl (516.2MB)
Installing collected packages: tensorflow-gpu
Successfully installed tensorflow-gpu-2.2.0
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPool2D, ZeroPadding2D, Dropout, BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import SGD

print(tf.__version__)
2.2.0
import numpy as np
import matplotlib.pyplot as plt

!git clone https://github.com/laxmimerit/dog-cat-full-dataset.git
Cloning into 'dog-cat-full-dataset'...
remote: Enumerating objects: 25027, done.
remote: Total 25027 (delta 0), reused 0 (delta 0), pack-reused 25027
Receiving objects: 100% (25027/25027), 541.62 MiB | 35.19 MiB/s, done.
Resolving deltas: 100% (5/5), done.
Checking out files: 100% (25001/25001), done.
test_data_dir = '/content/dog-cat-full-dataset/data/test'
train_data_dir = '/content/dog-cat-full-dataset/data/train'
img_width = 32
img_height = 32
batch_size = 20
datagen = ImageDataGenerator(rescale=1./255)
train_generator = datagen.flow_from_directory(directory=train_data_dir, 
                                              target_size=(img_width, img_height),
                                              classes = ['dogs', 'cats'],
                                              class_mode = 'binary',
                                              batch_size=batch_size)  # preprocess the training set
Found 20000 images belonging to 2 classes.
train_generator.classes
array([1, 1, 1, ..., 1, 1, 1], dtype=int32)
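The all-ones tail of `train_generator.classes` is expected: with `classes=['dogs', 'cats']`, Keras assigns label indices in list order, so dogs → 0 and cats → 1, and files are enumerated class by class. A minimal plain-Python sketch of the mapping that `train_generator.class_indices` would report:

```python
# Keras builds class_indices from the classes list order: dogs -> 0, cats -> 1
classes = ['dogs', 'cats']
class_indices = {name: i for i, name in enumerate(classes)}
print(class_indices)  # {'dogs': 0, 'cats': 1}
```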
validation_generator = datagen.flow_from_directory(directory=test_data_dir, 
                                              target_size=(img_width, img_height),
                                              classes=['dogs', 'cats'],
                                              class_mode='binary', 
                                              batch_size=batch_size)  # preprocess the validation set
                                                   # note: the video used val_data_dir here; it should be test_data_dir
Found 5000 images belonging to 2 classes.
len(train_generator)*batch_size  # 1000 batches of 20 images each = 20000
20000
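Note that `len(train_generator)` counts batches, not images: with 20,000 training images and `batch_size=20`, the generator yields ceil(20000 / 20) = 1000 batches per epoch, which is why the training logs below show 1000 steps. A quick check of that arithmetic:

```python
import math

n_images = 20000   # training images found by flow_from_directory
batch_size = 20
batches_per_epoch = math.ceil(n_images / batch_size)  # last batch may be partial
print(batches_per_epoch)  # 1000
```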

Building a basic CNN model

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform',input_shape = (img_width,img_height,3)))
model.add(MaxPool2D(2,2))

model.add(Flatten())
model.add(Dense(128,activation='relu',kernel_initializer='he_uniform'))
model.add(Dense(1,activation='sigmoid'))

opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer = opt, loss ='binary_crossentropy',metrics = ['accuracy'])
# fit_generator is deprecated in TF 2.x; model.fit accepts generators directly
history = model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=5,
                    validation_data=validation_generator, validation_steps=len(validation_generator), verbose=1)
Epoch 1/5
1000/1000 [==============================] - 97s 97ms/step - loss: 0.4727 - accuracy: 0.7731 - val_loss: 0.6751 - val_accuracy: 0.6664
Epoch 2/5
1000/1000 [==============================] - 97s 97ms/step - loss: 0.4061 - accuracy: 0.8155 - val_loss: 0.6181 - val_accuracy: 0.7242
Epoch 3/5
1000/1000 [==============================] - 96s 96ms/step - loss: 0.3337 - accuracy: 0.8526 - val_loss: 0.6654 - val_accuracy: 0.7026
Epoch 4/5
1000/1000 [==============================] - 97s 97ms/step - loss: 0.2576 - accuracy: 0.8921 - val_loss: 0.7092 - val_accuracy: 0.7094
Epoch 5/5
1000/1000 [==============================] - 99s 99ms/step - loss: 0.1987 - accuracy: 0.9205 - val_loss: 0.7983 - val_accuracy: 0.7166
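The `loss='binary_crossentropy'` used here is, per sample, −(y·log p + (1−y)·log(1−p)) on the sigmoid output p. A minimal pure-Python sketch (not the Keras implementation, which operates on tensors, but the same per-sample formula including the clipping Keras applies for numerical stability):

```python
import math

def binary_crossentropy(y, p, eps=1e-7):
    # clip p away from 0 and 1 to avoid log(0), as Keras does internally
    p = min(max(p, eps), 1 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# A coin-flip prediction (p = 0.5) costs ln 2 ≈ 0.693 regardless of the label
print(round(binary_crossentropy(1, 0.5), 4))  # 0.6931
```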
history.history
{'accuracy': [0.7731000185012817,
  0.8154500126838684,
  0.852649986743927,
  0.8921499848365784,
  0.9205499887466431],
 'loss': [0.4726707935333252,
  0.4061165750026703,
  0.3337213099002838,
  0.25763893127441406,
  0.19867193698883057],
 'val_accuracy': [0.6664000153541565,
  0.7242000102996826,
  0.7026000022888184,
  0.7093999981880188,
  0.7166000008583069],
 'val_loss': [0.6750754117965698,
  0.6181055903434753,
  0.6653952598571777,
  0.7092159390449524,
  0.7983075976371765]}
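The parameter count of the basic model above can be checked by hand: a Conv2D layer has (k·k·in_channels + 1)·filters weights, `padding='same'` keeps the 32×32 spatial size, and the 2×2 pooling halves it to 16×16 before Flatten. A sketch of that bookkeeping (matching what `model.summary()` would report for this architecture):

```python
def conv2d_params(k, in_ch, filters):
    # weights per filter: k*k*in_ch, plus one bias per filter
    return (k * k * in_ch + 1) * filters

def dense_params(in_units, out_units):
    return (in_units + 1) * out_units

conv = conv2d_params(3, 3, 64)       # 1,792
flat = 16 * 16 * 64                  # 32x32 'same' conv, then 2x2 pool -> 16x16x64
d1 = dense_params(flat, 128)         # 2,097,280
d2 = dense_params(128, 1)            # 129
print(conv + d1 + d2)  # 2099201 trainable parameters in total
```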
def plot_learningCurve(history):  # plotting helper (different from the previous lesson)
    epoch_range = range(1,6)
    plt.plot(epoch_range, history.history['accuracy'])
    plt.plot(epoch_range, history.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend(['Train','Val'], loc ='upper left')
    plt.show()

    plt.plot(epoch_range, history.history['loss'])
    plt.plot(epoch_range, history.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['Train','Val'], loc ='upper left')
    plt.show()

plot_learningCurve(history)

[Figure: Model Accuracy learning curve (train vs. val)]
[Figure: Model Loss learning curve (train vs. val)]

Building the first three blocks of a VGG16-style model

model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform',input_shape = (img_width,img_height,3)))
model.add(MaxPool2D(2,2))

model.add(Conv2D(filters=128, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform'))
model.add(MaxPool2D(2,2))

model.add(Conv2D(filters=256, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform'))
model.add(MaxPool2D(2,2))

model.add(Flatten())
model.add(Dense(128,activation='relu',kernel_initializer='he_uniform'))
model.add(Dense(1,activation='sigmoid'))

opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer = opt, loss ='binary_crossentropy',metrics = ['accuracy'])
# fit_generator is deprecated in TF 2.x; model.fit accepts generators directly
history = model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=5,
                    validation_data=validation_generator, validation_steps=len(validation_generator), verbose=1)
Epoch 1/5
1000/1000 [==============================] - 203s 203ms/step - loss: 0.6464 - accuracy: 0.6449 - val_loss: 0.6047 - val_accuracy: 0.6784
Epoch 2/5
1000/1000 [==============================] - 201s 201ms/step - loss: 0.5321 - accuracy: 0.7373 - val_loss: 0.5400 - val_accuracy: 0.7294
Epoch 3/5
1000/1000 [==============================] - 201s 201ms/step - loss: 0.4647 - accuracy: 0.7814 - val_loss: 0.4907 - val_accuracy: 0.7668
Epoch 4/5
1000/1000 [==============================] - 201s 201ms/step - loss: 0.4026 - accuracy: 0.8191 - val_loss: 0.4838 - val_accuracy: 0.7646
Epoch 5/5
1000/1000 [==============================] - 202s 202ms/step - loss: 0.3429 - accuracy: 0.8464 - val_loss: 0.5332 - val_accuracy: 0.7492
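Both models are trained with `SGD(learning_rate=0.01, momentum=0.9)`. The (non-Nesterov) Keras update keeps a velocity per weight: v ← momentum·v − lr·g, then w ← w + v, so past gradients keep pushing the weight in a consistent direction. A one-weight sketch of two update steps:

```python
lr, momentum = 0.01, 0.9
w, v = 1.0, 0.0

for g in [0.5, 0.5]:            # two steps with the same gradient
    v = momentum * v - lr * g   # velocity accumulates past gradients
    w = w + v                   # second step moves further than the first
print(round(w, 6))  # 0.9855
```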

Batch normalization and Dropout


model = Sequential()
model.add(Conv2D(filters=64, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform',input_shape = (img_width,img_height,3)))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.2))

model.add(Conv2D(filters=128, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.3))

model.add(Conv2D(filters=256, kernel_size=(3,3),padding='same',kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(MaxPool2D(2,2))
model.add(Dropout(0.5))

model.add(Flatten())
model.add(Dense(128,activation='relu',kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Dropout(0.5))

model.add(Dense(1,activation='sigmoid'))

opt = SGD(learning_rate=0.01, momentum=0.9)
model.compile(optimizer = opt, loss ='binary_crossentropy',metrics = ['accuracy'])
# fit_generator is deprecated in TF 2.x; model.fit accepts generators directly
history1 = model.fit(train_generator, steps_per_epoch=len(train_generator), epochs=10,
                     validation_data=validation_generator, validation_steps=len(validation_generator), verbose=1)
Epoch 1/10
1000/1000 [==============================] - 228s 228ms/step - loss: 0.6703 - accuracy: 0.6273 - val_loss: 0.6186 - val_accuracy: 0.6654
Epoch 2/10
1000/1000 [==============================] - 229s 229ms/step - loss: 0.6147 - accuracy: 0.6705 - val_loss: 0.6475 - val_accuracy: 0.7078
Epoch 3/10
1000/1000 [==============================] - 227s 227ms/step - loss: 0.5851 - accuracy: 0.6957 - val_loss: 0.5313 - val_accuracy: 0.7428
Epoch 4/10
1000/1000 [==============================] - 228s 228ms/step - loss: 0.5605 - accuracy: 0.7139 - val_loss: 0.6296 - val_accuracy: 0.6810
Epoch 5/10
1000/1000 [==============================] - 231s 231ms/step - loss: 0.5387 - accuracy: 0.7299 - val_loss: 0.5775 - val_accuracy: 0.6906
Epoch 6/10
1000/1000 [==============================] - 230s 230ms/step - loss: 0.5229 - accuracy: 0.7469 - val_loss: 0.5130 - val_accuracy: 0.7530
Epoch 7/10
1000/1000 [==============================] - 229s 229ms/step - loss: 0.5118 - accuracy: 0.7503 - val_loss: 0.4839 - val_accuracy: 0.7710
Epoch 8/10
1000/1000 [==============================] - 229s 229ms/step - loss: 0.5013 - accuracy: 0.7583 - val_loss: 0.5235 - val_accuracy: 0.7394
Epoch 9/10
1000/1000 [==============================] - 229s 229ms/step - loss: 0.4920 - accuracy: 0.7661 - val_loss: 1.0252 - val_accuracy: 0.6678
Epoch 10/10
1000/1000 [==============================] - 228s 228ms/step - loss: 0.4880 - accuracy: 0.7670 - val_loss: 0.6666 - val_accuracy: 0.6802
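The Dropout layers above use inverted dropout at training time: each unit is zeroed with probability `rate`, and the survivors are scaled by 1/(1 − rate) so the expected activation is unchanged (at inference, dropout is a no-op). A minimal sketch, assuming a flat list of activations rather than a tensor:

```python
import random

def dropout(xs, rate, rng):
    # training-time inverted dropout: zero with prob `rate`, rescale survivors
    keep = 1.0 - rate
    return [x / keep if rng.random() < keep else 0.0 for x in xs]

rng = random.Random(0)
xs = [1.0] * 10
out = dropout(xs, 0.5, rng)
# each surviving unit becomes 2.0, so the expected sum matches the input's
print(out)
```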
def plot_learningCurve(history1, epoch):  # plotting helper, now parameterized by epoch count (unlike the previous lesson)
    epoch_range = range(1,epoch+1)
    plt.plot(epoch_range, history1.history['accuracy'])
    plt.plot(epoch_range, history1.history['val_accuracy'])
    plt.title('Model Accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.legend(['Train','Val'], loc ='upper left')
    plt.show()

    plt.plot(epoch_range, history1.history['loss'])
    plt.plot(epoch_range, history1.history['val_loss'])
    plt.title('Model Loss')
    plt.xlabel('Epoch')
    plt.ylabel('Loss')
    plt.legend(['Train','Val'], loc ='upper left')
    plt.show()

plot_learningCurve(history1,10)

[Figure: Model Accuracy learning curve for the BatchNorm + Dropout model (train vs. val)]
[Figure: Model Loss learning curve for the BatchNorm + Dropout model (train vs. val)]

