[TensorFlow 2.0 Notes 21] Image Classification on a Custom Dataset (the Pokémon Dataset), plus a Supplement on tf.where!

1. Dataset Introduction and Loading

1.1. A Brief Description of the Dataset

  • We collected clips from the Pokémon anime and extracted images of 5 characters, each in a variety of forms: Pikachu 234 images, Mewtwo 239, Squirtle 223, Charmander 238, and Bulbasaur 234.
  • Dataset split: the images of all classes are divided according to the ratio below. Note that the ratio is not applied per class; it is applied to the shuffled pool of all 1,168 images (the code below uses 70% train / 15% validation / 15% test).

Note: if you allocate only 10% to the test split (i.e., very few test samples), the measured test performance fluctuates a lot. Here 10% would be about 100+ images (20+ per class), which is still usable, but the smaller the dataset, the noisier the test metrics. To make evaluation more reliable, we deliberately enlarge the test and validation splits. With roughly 1,000+ images in total, this counts as a small-to-medium dataset.
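
For concreteness, here is a quick sketch of what the 70%/15%/15% split used in the loader below yields for the 1,168 collected images (plain arithmetic, independent of the actual data):

n = 1168                               # total number of collected images
n_train = int(0.7 * n)                 # 817 training images
n_val = int(0.85 * n) - int(0.7 * n)   # 175 validation images
n_test = n - int(0.85 * n)             # 176 test images
print(n_train, n_val, n_test)          # 817 175 176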

1.2. Implementation Steps

  • Overall, we will proceed in 4 major steps:
  • Dataset loading (the important one).
  • Building the model.
  • Training, validation, and testing.
  • Transfer learning (mainly aimed at small datasets: if you have little data but still want good performance, transfer learning is extremely useful; by sharing knowledge learned in another domain, it lets you reach solid performance here with only a small amount of data).

1.3. Format of the Loaded Data

  • The data is loaded as pairs of (image path, integer label), as described below.

**Note:** TensorFlow's `map` function is especially important for data preprocessing; it is what turns the stored image paths into actual images via TensorFlow's built-in decoding functions (see the sketch in Section 1.4).

1.4. Data Processing with map

  • The core of the processing is mapping each path to a decoded image; a minimal sketch follows (the full version is the `preprocess` function in Section 2.2).
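
A minimal sketch of the idea, assuming the paths point to JPEG/PNG files and using a hypothetical two-element path list (in this article the real lists come from images.csv):

import tensorflow as tf

def parse_image(path, label):
    # read the raw bytes from disk and decode them into an RGB tensor
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, expand_animations=False)
    img = tf.image.resize(img, [224, 224])
    return img, label

# hypothetical inputs; in this article they come from load_csv below
paths = ['pokemon/pikachu/00000001.png', 'pokemon/mewtwo/00000002.jpg']
labels = [3, 2]

db = tf.data.Dataset.from_tensor_slices((paths, labels))
db = db.map(parse_image).batch(2)   # map: path string -> decoded image tensor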

1.5. Custom Dataset Processing Pipeline

  • The core of the pipeline:
  • Encode each class by name, assigning labels 0-4, and store the mapping in a dictionary.
  • Format of the images.csv file: once images.csv has been generated, we never need to rebuild it; on later runs we simply parse images.csv directly, turning the image path in the first column back into the image itself.
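
For illustration, a few rows of the generated images.csv might look like this (hypothetical file names; with the class folders sorted alphabetically, bulbasaur=0, charmander=1, mewtwo=2, pikachu=3, squirtle=4):

pokemon/bulbasaur/00000000.png,0
pokemon/mewtwo/00000042.jpg,2
pokemon/pikachu/00000123.png,3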

2. Dataset Preprocessing

2.1. Data Augmentation

  • With data augmentation of this kind, the effective number of training images multiplies. In principle, augmentation can produce unlimited variants, since the same image can be transformed in many different ways. Generally, random left-right flips are a safe, stable choice, and random cropping trims away a bit of the border.
  • For normalization: above, we rescale pixel values from 0~255 down to 0~1; previously we also tried rescaling to -1~1. For image datasets, however, there is a more effective normalization: standardizing each channel with the ImageNet mean and standard deviation, as in the code below. For example, a red-channel value of 0.5 becomes (0.5 - 0.485) / 0.229 ≈ 0.066.

2.2. Preprocessing Code: Full Listing

import  os, glob
import  random, csv
import tensorflow as tf

def load_csv(root, filename, name2label):
    """ Load (or first build) the CSV index file.
    :param root:            root directory of the dataset
    :param filename:        name of the csv file
    :param name2label:      dict mapping class name -> integer label
    :return:
    """
    # Build the .csv file only if it does not exist yet.
    if not os.path.exists(os.path.join(root, filename)):
        images = []
        for name in name2label.keys():
            # e.g. 'pokemon\\mewtwo\\00001.png'
            images += glob.glob(os.path.join(root, name, '*.png'))
            images += glob.glob(os.path.join(root, name, '*.jpg'))
            images += glob.glob(os.path.join(root, name, '*.jpeg'))

        # e.g. 1167, 'pokemon\\bulbasaur\\00000000.png'
        print(len(images), images)

        random.shuffle(images)
        with open(os.path.join(root, filename), mode='w', newline='') as f:
            writer = csv.writer(f)
            for img in images:  # e.g. 'pokemon\\bulbasaur\\00000000.png'
                name = img.split(os.sep)[-2]
                label = name2label[name]
                # 'pokemon\\bulbasaur\\00000000.png', 0   i.e. image path and its label
                writer.writerow([img, label])
            print('written into csv file:', filename)

    # read from csv file
    images, labels = [], []
    with open(os.path.join(root, filename)) as f:
        reader = csv.reader(f)
        for row in reader:
            # 'pokemon\\bulbasaur\\00000000.png', 0
            img, label = row
            label = int(label)

            images.append(img)
            labels.append(label)

    assert len(images) == len(labels)

    return images, labels

# Utility for loading the pokemon dataset.
def load_pokemon(root, mode='train'):
    """ Utility for loading the pokemon dataset.
    :param root:    directory where the dataset is stored
    :param mode:    which split to load: train, val, or test
    :return:
    """
    # Build the label encoding table, labels in the range 0-4;
    name2label = {}  # "sq...": 0   class name -> class label; the root has 5 folders, i.e. 5 classes: labels 0-4
    for name in sorted(os.listdir(os.path.join(root))):     # list all entries (sorted, for a stable encoding);
        if not os.path.isdir(os.path.join(root, name)):
            continue
        # assign each class the next integer code
        name2label[name] = len(name2label.keys())

    # Read the label information; save it to / load it from the index file images.csv.
    # Two aligned lists: [file1, file2, ...] and the corresponding labels [3, 1, ...].
    # Each image path, plus the class it belongs to, is stored in the CSV file.
    images, labels = load_csv(root, 'images.csv', name2label)

    # Split the images: 70% train, 15% validation, 15% test.
    if mode == 'train':                                                     # first 70%: training set
        images = images[:int(0.7 * len(images))]
        labels = labels[:int(0.7 * len(labels))]
    elif mode == 'val':                                                     # 15%, from 70% to 85%: validation set
        images = images[int(0.7 * len(images)):int(0.85 * len(images))]
        labels = labels[int(0.7 * len(labels)):int(0.85 * len(labels))]
    else:                                                                   # last 15%, from 85% to 100%: test set
        images = images[int(0.85 * len(images)):]
        labels = labels[int(0.85 * len(labels)):]

    return images, labels, name2label

# Data normalization.
# Where do the mean and std values below come from? They are statistics over all of
# ImageNet (millions of images). They are meaningful here because the distribution of
# most natural-image datasets is essentially the same as ImageNet's.
# These 6 numbers are effectively universal; a quick web search will turn them up.
img_mean = tf.constant([0.485, 0.456, 0.406])
img_std = tf.constant([0.229, 0.224, 0.225])
def normalize(x, mean=img_mean, std=img_std):
    # x shape: [224, 224, 3]
    # mean/std have shape [3]; broadcasting applies here. Aligning shapes from the right:
    # mean: [3] -> [1, 1, 3]              (insert leading 1s)
    # mean: [1, 1, 3] -> [224, 224, 3]    (tile along the 224 dims)
    x = (x - mean)/std
    return x

# The inverse of normalize(); needed e.g. when visualizing normalized data.
def denormalize(x, mean=img_mean, std=img_std):
    x = x * std + mean
    return x

def preprocess(x,y):
    # x: path of the image,
    # y: integer label of the image
    x = tf.io.read_file(x)                  # read the image from its path
    x = tf.image.decode_jpeg(x, channels=3) # force 3 channels: some images are RGBA, with an extra alpha channel
    x = tf.image.resize(x, [244, 244])      # resize slightly above 224x224 (ResNet's input size) so the random crop below has room to vary

    # data augmentation, still on the 0~255 scale; this must happen before normalization (it operates on the image itself)
    # x = tf.image.random_flip_up_down(x)   # random up-down flip; flipping *every* image would add nothing, so only a random subset is flipped
    x= tf.image.random_flip_left_right(x)   # random left-right flip
    # x = tf.image.random_crop(x, [224, 224, 3]) # random crop to 224x224; note the resize must be larger than 224 (e.g. 244 or 250), otherwise the crop does nothing

    # x: [0,255] => 0~1 (or -0.5~0.5); then normalization
    x = tf.cast(x, dtype=tf.float32) / 255.
    # 0~1 => roughly zero-mean, unit-variance, via the function above
    x = normalize(x)

    y = tf.convert_to_tensor(y)

    return x, y

def main():
    import  time
    images, labels, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman', 'train')
    # image paths
    print('images', len(images), images)
    # image labels
    print('labels', len(labels), labels)
    # the encoding table: class name for each label
    print(table)
    # wrap the lists in a Dataset object
    db = tf.data.Dataset.from_tensor_slices((images, labels))
    # dataset preprocessing
    db = db.shuffle(1000).map(preprocess).batch(32)
    # visualize the images in TensorBoard
    writer = tf.summary.create_file_writer('log')

    for step, (x, y) in enumerate(db):
        # x: [32, 244, 244, 3]  (244 because the random crop above is commented out)
        # y: [32]
        with writer.as_default():
            tf.summary.image('img', x, step=step, max_outputs=9)  # log 9 images per step
            time.sleep(5)                                         # sleep 5s per batch so the display isn't too fast

if __name__ == '__main__':
    main()

3. Building the Network Model

3.1. The Keras Interface in TensorFlow 2.0

  • Creating a model in TensorFlow 2.0 is quite simple thanks to the Keras interface: a model class just inherits from `Model`, creates its sub-units (themselves `Model`/`Layer` subclasses) in the constructor, and then invokes them inside `call()` to carry out each layer's forward pass, as in the toy sketch below.
  • We have written a ResNet before, in particular ResNet-18, but that was a trimmed version: it may not have had the full 18 layers, the input was certainly not 224×224, and the inputs, outputs, and channel counts were all cut down. The version below is comparatively standard. That said, ResNet has no single canonical implementation, and some of the intermediate hyperparameters can be tuned from experience.
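A minimal sketch of this subclassing pattern (a hypothetical toy model, shown only to illustrate the shape of the code before the real ResNet below):

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

class TinyModel(keras.Model):
    def __init__(self, num_classes):
        super(TinyModel, self).__init__()
        # sub-units are created in the constructor...
        self.hidden = layers.Dense(32, activation='relu')
        self.out = layers.Dense(num_classes)

    def call(self, inputs, training=None):
        # ...and invoked in order inside call() to define the forward pass
        x = self.hidden(inputs)
        return self.out(x)

# usage: model = TinyModel(5); logits = model(tf.zeros([4, 8]))

The actual ResNet-18 implementation follows: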
import  os
import  tensorflow as tf
import  numpy as np
from    tensorflow import keras
from    tensorflow.keras import layers

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

class ResnetBlock(keras.Model):

    def __init__(self, channels, strides=1):
        super(ResnetBlock, self).__init__()

        self.channels = channels
        self.strides = strides

        self.conv1 = layers.Conv2D(channels, 3, strides=strides,
                                   padding=[[0,0],[1,1],[1,1],[0,0]])  # explicit 1-pixel padding on H and W ([batch, h, w, channel])
        self.bn1 = keras.layers.BatchNormalization()
        self.conv2 = layers.Conv2D(channels, 3, strides=1,
                                   padding=[[0,0],[1,1],[1,1],[0,0]])
        self.bn2 = keras.layers.BatchNormalization()

        if strides!=1:
            # when the main path downsamples, the shortcut must be downsampled to match
            self.down_conv = layers.Conv2D(channels, 1, strides=strides, padding='valid')
            self.down_bn = tf.keras.layers.BatchNormalization()

    def call(self, inputs, training=None):
        residual = inputs

        x = self.conv1(inputs)
        x = tf.nn.relu(x)
        x = self.bn1(x, training=training)
        x = self.conv2(x)
        x = tf.nn.relu(x)
        x = self.bn2(x, training=training)

        # shortcut (residual) connection
        if self.strides!=1:
            residual = self.down_conv(inputs)
            residual = tf.nn.relu(residual)
            residual = self.down_bn(residual, training=training)

        x = x + residual
        x = tf.nn.relu(x)
        return x

# Implementation of ResNet-18.
class ResNet(keras.Model):

    def __init__(self, num_classes, initial_filters=16, **kwargs):
        super(ResNet, self).__init__(**kwargs)

        self.stem = layers.Conv2D(initial_filters, 3, strides=3, padding='valid')

        # 8 ResnetBlocks in total: 16 conv layers + 1 stem layer + 1 output layer = 18 layers.
        # They form 4 groups; the first block of each group reduces height/width (strides > 1),
        # while the second (strides=1) keeps the dimensions unchanged.
        self.blocks = keras.models.Sequential([
            ResnetBlock(initial_filters * 2, strides=3),
            ResnetBlock(initial_filters * 2, strides=1),
            # layers.Dropout(rate=0.5),

            ResnetBlock(initial_filters * 4, strides=3),
            ResnetBlock(initial_filters * 4, strides=1),

            ResnetBlock(initial_filters * 8, strides=2),
            ResnetBlock(initial_filters * 8, strides=1),

            ResnetBlock(initial_filters * 16, strides=2),
            ResnetBlock(initial_filters * 16, strides=1),
        ])

        self.final_bn = layers.BatchNormalization()
        self.avg_pool = layers.GlobalMaxPool2D()        # note: despite the attribute name, this is global *max* pooling
        self.fc = layers.Dense(num_classes)             # fully-connected output layer

    def call(self, inputs, training=None):
        # print('x:',inputs.shape)
        out = self.stem(inputs)  # stem (root) connection
        out = tf.nn.relu(out)

        # print('stem:',out.shape)

        out = self.blocks(out, training=training)
        # print('res:',out.shape)

        out = self.final_bn(out, training=training)
        # out = tf.nn.relu(out)

        out = self.avg_pool(out)

        # print('avg_pool:',out.shape)
        out = self.fc(out)  # the classification layer produces the output logits
        # print('out:',out.shape)
        return out

def main():
    num_classes = 5

    resnet18 = ResNet(num_classes)
    resnet18.build(input_shape=(4,224,224,3))
    resnet18.summary()

if __name__ == '__main__':
    main()

3.2. Number of Model Parameters

  • Output: the network has about 2.8 million parameters, divided into trainable and non-trainable ones. The non-trainable parameters live mainly in the BatchNormalization layers: those layers keep running statistics (moving mean and variance) that are accumulated while the model runs rather than learned through backpropagation; see the sketch below. So, how do we use this model next?
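
As a standalone check of where the non-trainable parameters come from (independent of the training script), a minimal sketch:

import tensorflow as tf

bn = tf.keras.layers.BatchNormalization()
bn.build(input_shape=(None, 224, 224, 64))

# gamma and beta are learned by backprop; moving_mean / moving_variance
# are running statistics updated during forward passes in training mode
print([v.name for v in bn.trainable_variables])      # gamma, beta
print([v.name for v in bn.non_trainable_variables])  # moving_mean, moving_variance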

4. Training the Network

4.1. Small Datasets Are Hard to Train On

  • Training from scratch: train_scratch.py
import os
import tensorflow as tf
import numpy as np

from tensorflow import keras
from tensorflow.keras import layers, optimizers, losses
from tensorflow.keras.callbacks import EarlyStopping

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

# import the concrete utilities defined earlier
from pokemon import  load_pokemon, normalize, denormalize
from resnet import ResNet                   # import the model

# The preprocessing function, copied over from pokemon.py (with the random crop enabled).
def preprocess(x,y):
    # x: path of the image; y: integer label of the image
    x = tf.io.read_file(x)
    x = tf.image.decode_jpeg(x, channels=3) # some images are RGBA; force 3 channels
    x = tf.image.resize(x, [244, 244])

    x = tf.image.random_flip_left_right(x)
    # x = tf.image.random_flip_up_down(x)
    x = tf.image.random_crop(x, [224,224,3])

    # x: [0,255] => 0~1, then standardized with the ImageNet statistics
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = normalize(x)
    y = tf.convert_to_tensor(y)
    y = tf.one_hot(y, depth=5)

    return x, y

batchsz = 8

# create the train db; only the training split needs to be shuffled
images, labels, table = load_pokemon('/home/zhangkf/johnCodes/TF2/TF2_8_data/pokeman',mode='train')
db_train = tf.data.Dataset.from_tensor_slices((images, labels))  # turn the lists into a Dataset object
db_train = db_train.shuffle(1000).map(preprocess).batch(batchsz) # map converts image paths into image contents
# create the validation db
images2, labels2, table = load_pokemon('/home/zhangkf/johnCodes/TF2/TF2_8_data/pokeman',mode='val')
db_val = tf.data.Dataset.from_tensor_slices((images2, labels2))
db_val = db_val.map(preprocess).batch(batchsz)
# create the test db
images3, labels3, table = load_pokemon('/home/zhangkf/johnCodes/TF2/TF2_8_data/pokeman',mode='test')
db_test = tf.data.Dataset.from_tensor_slices((images3, labels3))
db_test = db_test.map(preprocess).batch(batchsz)


# The training set is tiny and ResNet is very expressive; the small 4-layer network below can be swapped in instead.
# resnet = keras.Sequential([
#     layers.Conv2D(16,5,3),
#     layers.MaxPool2D(3,3),
#     layers.ReLU(),
#     layers.Conv2D(64,5,3),
#     layers.MaxPool2D(2,2),
#     layers.ReLU(),
#     layers.Flatten(),
#     layers.Dense(64),
#     layers.ReLU(),
#     layers.Dense(5)
# ])

# First, create the ResNet-18.
resnet = ResNet(5)
resnet.build(input_shape=(batchsz, 224, 224, 3))
resnet.summary()

# EarlyStopping monitor: fires when val_accuracy has failed to improve by at least
# min_delta for `patience` (=20) consecutive epochs.
early_stopping = EarlyStopping(
    monitor='val_accuracy',
    min_delta=0.001,
    patience=20
)

# Compile (assemble) the network.
resnet.compile(optimizer=optimizers.Adam(lr=1e-4),
               loss=losses.CategoricalCrossentropy(from_logits=True),
               metrics=['accuracy'])

# The standard train/val/test procedure: model selection must go through db_val,
# which is what the EarlyStopping callback monitors.
resnet.fit(db_train, validation_data=db_val, validation_freq=1, epochs=1000,
           callbacks=[early_stopping])   # validate once per epoch; training stops early when the callback fires
resnet.evaluate(db_test)
  • Training results
ssh://zhangkf@192.168.136.64:22/home/zhangkf/anaconda3/envs/tf2b/bin/python -u /home/zhangkf/johnCodes/TF2/TF2_8_data/train_scratch.py
WARNING: Logging before flag parsing goes to stderr.
W0828 10:51:31.044445 139752945207040 deprecation.py:323] From /home/zhangkf/anaconda3/envs/tf2b/lib/python3.7/site-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Model: "res_net"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              multiple                  448       
_________________________________________________________________
sequential (Sequential)      multiple                  2797280   
_________________________________________________________________
batch_normalization_20 (Batc multiple                  1024      
_________________________________________________________________
global_max_pooling2d (Global multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  1285      
=================================================================
Total params: 2,800,037
Trainable params: 2,794,725
Non-trainable params: 5,312
_________________________________________________________________
Epoch 1/1000
88/88 [==============================] - 37s 425ms/step - loss: 2.3255 - accuracy: 0.2472 - val_loss: 1.6504 - val_accuracy: 0.1845
Epoch 2/1000
88/88 [==============================] - 24s 277ms/step - loss: 0.3819 - accuracy: 0.8250 - val_loss: 1.7054 - val_accuracy: 0.1760
Epoch 3/1000
88/88 [==============================] - 24s 273ms/step - loss: 0.0643 - accuracy: 0.9928 - val_loss: 1.7032 - val_accuracy: 0.1931
Epoch 4/1000
88/88 [==============================] - 24s 271ms/step - loss: 0.0198 - accuracy: 0.9949 - val_loss: 1.7639 - val_accuracy: 0.2318
Epoch 5/1000
88/88 [==============================] - 21s 242ms/step - loss: 0.0069 - accuracy: 1.0000 - val_loss: 1.8334 - val_accuracy: 0.2103
Epoch 6/1000
88/88 [==============================] - 23s 266ms/step - loss: 0.0046 - accuracy: 1.0000 - val_loss: 1.8315 - val_accuracy: 0.3004
Epoch 7/1000
88/88 [==============================] - 24s 271ms/step - loss: 0.0036 - accuracy: 1.0000 - val_loss: 1.9369 - val_accuracy: 0.3090
Epoch 8/1000
88/88 [==============================] - 23s 264ms/step - loss: 0.0029 - accuracy: 1.0000 - val_loss: 2.0374 - val_accuracy: 0.2918
Epoch 9/1000
88/88 [==============================] - 24s 268ms/step - loss: 0.0024 - accuracy: 1.0000 - val_loss: 2.0581 - val_accuracy: 0.3219
Epoch 10/1000
88/88 [==============================] - 24s 269ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 2.0795 - val_accuracy: 0.3219
Epoch 11/1000
88/88 [==============================] - 24s 268ms/step - loss: 0.0018 - accuracy: 1.0000 - val_loss: 2.0932 - val_accuracy: 0.3047
Epoch 12/1000
88/88 [==============================] - 24s 272ms/step - loss: 0.0016 - accuracy: 1.0000 - val_loss: 2.0997 - val_accuracy: 0.3047
Epoch 13/1000
88/88 [==============================] - 21s 243ms/step - loss: 0.0014 - accuracy: 1.0000 - val_loss: 2.1042 - val_accuracy: 0.3090
Epoch 14/1000
88/88 [==============================] - 23s 265ms/step - loss: 0.0012 - accuracy: 1.0000 - val_loss: 2.1075 - val_accuracy: 0.3090
Epoch 15/1000
88/88 [==============================] - 24s 270ms/step - loss: 0.0011 - accuracy: 1.0000 - val_loss: 2.1106 - val_accuracy: 0.3090
Epoch 16/1000
88/88 [==============================] - 23s 260ms/step - loss: 0.0010 - accuracy: 1.0000 - val_loss: 2.1136 - val_accuracy: 0.3090
Epoch 17/1000
88/88 [==============================] - 23s 267ms/step - loss: 9.1804e-04 - accuracy: 1.0000 - val_loss: 2.1162 - val_accuracy: 0.3090
Epoch 18/1000
88/88 [==============================] - 24s 268ms/step - loss: 8.3833e-04 - accuracy: 1.0000 - val_loss: 2.1183 - val_accuracy: 0.3090
Epoch 19/1000
88/88 [==============================] - 24s 272ms/step - loss: 7.6828e-04 - accuracy: 1.0000 - val_loss: 2.1211 - val_accuracy: 0.3047
Epoch 20/1000
88/88 [==============================] - 24s 269ms/step - loss: 7.0653e-04 - accuracy: 1.0000 - val_loss: 2.1231 - val_accuracy: 0.3047
Epoch 21/1000
88/88 [==============================] - 24s 268ms/step - loss: 6.5178e-04 - accuracy: 1.0000 - val_loss: 2.1257 - val_accuracy: 0.3047
Epoch 22/1000
88/88 [==============================] - 23s 262ms/step - loss: 6.0310e-04 - accuracy: 1.0000 - val_loss: 2.1278 - val_accuracy: 0.3047
Epoch 23/1000
88/88 [==============================] - 24s 268ms/step - loss: 5.5923e-04 - accuracy: 1.0000 - val_loss: 2.1298 - val_accuracy: 0.3047
Epoch 24/1000
88/88 [==============================] - 23s 267ms/step - loss: 5.1979e-04 - accuracy: 1.0000 - val_loss: 2.1319 - val_accuracy: 0.3047
Epoch 25/1000
88/88 [==============================] - 23s 259ms/step - loss: 4.8411e-04 - accuracy: 1.0000 - val_loss: 2.1338 - val_accuracy: 0.3047
Epoch 26/1000

4.2. How to Fix Training on Small Datasets

  • The problem: small datasets are hard to train on. In the run above, training accuracy quickly reaches ~100%, while validation accuracy stays around 20-30%; the network has effectively learned nothing useful, i.e. it is badly overfit. Why does this happen? (Once val_accuracy fails to improve by min_delta for `patience` consecutive epochs, EarlyStopping fires and the program stops.)

  • ResNet-18's validation accuracy is so low that the network is basically not working, mainly because ResNet is too large for this task: it is deep and has a lot of parameters. What can we do? The first and most fundamental fix is more data: ImageNet has millions of images, whereas this Pokémon dataset has only ~200 images per class.

  • The second option: constrain the network, e.g. reduce the number of layers or parameters, or add regularization and other anti-overfitting measures; all of these are worth trying (see the sketch after this list).

  • But our dataset really is tiny, so here we take another route and simply switch to a much smaller network. When data is scarce, a small network often performs surprisingly well, and it is much easier to train. Let's test it.
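
As an illustration of the second option (a sketch only; the run logged below uses the plain small network from the commented-out block in train_scratch.py, without these extras), the same 4-layer network with L2 weight decay and dropout added:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

reg = regularizers.l2(1e-4)        # hypothetical weight-decay strength
small_net = keras.Sequential([
    layers.Conv2D(16, 5, 3, kernel_regularizer=reg),
    layers.MaxPool2D(3, 3),
    layers.ReLU(),
    layers.Conv2D(64, 5, 3, kernel_regularizer=reg),
    layers.MaxPool2D(2, 2),
    layers.ReLU(),
    layers.Flatten(),
    layers.Dropout(0.5),           # randomly zero half the features to curb overfitting
    layers.Dense(64, kernel_regularizer=reg),
    layers.ReLU(),
    layers.Dense(5),
])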

  • After switching to the small network, the results are as follows: test accuracy reaches roughly 87% (0.8755 on the final evaluate).

ssh://zhangkf@192.168.136.64:22/home/zhangkf/anaconda3/envs/tf2b/bin/python -u /home/zhangkf/johnCodes/TF2/TF2_8_data/train_scratch.py
WARNING: Logging before flag parsing goes to stderr.
W0828 10:33:44.677829 139788463359744 deprecation.py:323] From /home/zhangkf/anaconda3/envs/tf2b/lib/python3.7/site-packages/tensorflow/python/data/util/random_seed.py:58: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              multiple                  1216      
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple                  0         
_________________________________________________________________
re_lu (ReLU)                 multiple                  0         
_________________________________________________________________
conv2d_1 (Conv2D)            multiple                  25664     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple                  0         
_________________________________________________________________
re_lu_1 (ReLU)               multiple                  0         
_________________________________________________________________
flatten (Flatten)            multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  36928     
_________________________________________________________________
re_lu_2 (ReLU)               multiple                  0         
_________________________________________________________________
dense_1 (Dense)              multiple                  325       
=================================================================
Total params: 64,133
Trainable params: 64,133
Non-trainable params: 0
_________________________________________________________________
Epoch 1/1000
88/88 [==============================] - 23s 266ms/step - loss: 1.4197 - accuracy: 0.3153 - val_loss: 1.1087 - val_accuracy: 0.6223
Epoch 2/1000
88/88 [==============================] - 22s 251ms/step - loss: 0.9204 - accuracy: 0.6749 - val_loss: 0.7547 - val_accuracy: 0.7940
Epoch 3/1000
88/88 [==============================] - 23s 266ms/step - loss: 0.6312 - accuracy: 0.8405 - val_loss: 0.5803 - val_accuracy: 0.8326
Epoch 4/1000
88/88 [==============================] - 19s 221ms/step - loss: 0.4844 - accuracy: 0.8674 - val_loss: 0.4981 - val_accuracy: 0.8369
Epoch 5/1000
88/88 [==============================] - 22s 253ms/step - loss: 0.4089 - accuracy: 0.8874 - val_loss: 0.4607 - val_accuracy: 0.8455
Epoch 6/1000
88/88 [==============================] - 23s 259ms/step - loss: 0.3629 - accuracy: 0.9137 - val_loss: 0.4383 - val_accuracy: 0.8584
Epoch 7/1000
88/88 [==============================] - 21s 243ms/step - loss: 0.3281 - accuracy: 0.9312 - val_loss: 0.4283 - val_accuracy: 0.8498
Epoch 8/1000
88/88 [==============================] - 24s 269ms/step - loss: 0.3019 - accuracy: 0.9329 - val_loss: 0.4155 - val_accuracy: 0.8541
Epoch 9/1000
88/88 [==============================] - 22s 249ms/step - loss: 0.2772 - accuracy: 0.9367 - val_loss: 0.4091 - val_accuracy: 0.8541
Epoch 10/1000
88/88 [==============================] - 23s 265ms/step - loss: 0.2553 - accuracy: 0.9413 - val_loss: 0.4041 - val_accuracy: 0.8541
Epoch 11/1000
88/88 [==============================] - 23s 259ms/step - loss: 0.2344 - accuracy: 0.9513 - val_loss: 0.4001 - val_accuracy: 0.8584
Epoch 12/1000
88/88 [==============================] - 22s 252ms/step - loss: 0.2156 - accuracy: 0.9560 - val_loss: 0.3960 - val_accuracy: 0.8584
Epoch 13/1000
88/88 [==============================] - 23s 257ms/step - loss: 0.1981 - accuracy: 0.9651 - val_loss: 0.3915 - val_accuracy: 0.8627
Epoch 14/1000
88/88 [==============================] - 23s 258ms/step - loss: 0.1804 - accuracy: 0.9686 - val_loss: 0.3909 - val_accuracy: 0.8627
Epoch 15/1000
88/88 [==============================] - 21s 237ms/step - loss: 0.1655 - accuracy: 0.9717 - val_loss: 0.3849 - val_accuracy: 0.8627
Epoch 16/1000
88/88 [==============================] - 24s 269ms/step - loss: 0.1501 - accuracy: 0.9758 - val_loss: 0.3862 - val_accuracy: 0.8627
Epoch 17/1000
88/88 [==============================] - 18s 206ms/step - loss: 0.1362 - accuracy: 0.9784 - val_loss: 0.3825 - val_accuracy: 0.8627
Epoch 18/1000
88/88 [==============================] - 19s 212ms/step - loss: 0.1225 - accuracy: 0.9788 - val_loss: 0.3804 - val_accuracy: 0.8627
Epoch 19/1000
88/88 [==============================] - 22s 254ms/step - loss: 0.1107 - accuracy: 0.9856 - val_loss: 0.3789 - val_accuracy: 0.8712
Epoch 20/1000
88/88 [==============================] - 23s 265ms/step - loss: 0.0996 - accuracy: 0.9883 - val_loss: 0.3785 - val_accuracy: 0.8755
Epoch 21/1000
88/88 [==============================] - 22s 249ms/step - loss: 0.0897 - accuracy: 0.9896 - val_loss: 0.3850 - val_accuracy: 0.8670
Epoch 22/1000
88/88 [==============================] - 22s 251ms/step - loss: 0.0808 - accuracy: 0.9935 - val_loss: 0.3869 - val_accuracy: 0.8670
Epoch 23/1000
88/88 [==============================] - 23s 258ms/step - loss: 0.0726 - accuracy: 0.9937 - val_loss: 0.3926 - val_accuracy: 0.8670
Epoch 24/1000
88/88 [==============================] - 23s 266ms/step - loss: 0.0652 - accuracy: 0.9937 - val_loss: 0.3958 - val_accuracy: 0.8627
Epoch 25/1000
88/88 [==============================] - 23s 260ms/step - loss: 0.0583 - accuracy: 0.9937 - val_loss: 0.3974 - val_accuracy: 0.8670
Epoch 26/1000
88/88 [==============================] - 20s 222ms/step - loss: 0.0521 - accuracy: 0.9958 - val_loss: 0.4033 - val_accuracy: 0.8670
Epoch 27/1000
88/88 [==============================] - 22s 255ms/step - loss: 0.0471 - accuracy: 0.9970 - val_loss: 0.4051 - val_accuracy: 0.8670
Epoch 28/1000
88/88 [==============================] - 23s 260ms/step - loss: 0.0424 - accuracy: 0.9986 - val_loss: 0.4089 - val_accuracy: 0.8627
Epoch 29/1000
88/88 [==============================] - 20s 231ms/step - loss: 0.0380 - accuracy: 0.9994 - val_loss: 0.4088 - val_accuracy: 0.8584
Epoch 30/1000
88/88 [==============================] - 20s 223ms/step - loss: 0.0342 - accuracy: 0.9994 - val_loss: 0.4104 - val_accuracy: 0.8627
Epoch 31/1000
88/88 [==============================] - 23s 264ms/step - loss: 0.0311 - accuracy: 0.9994 - val_loss: 0.4137 - val_accuracy: 0.8627
Epoch 32/1000
88/88 [==============================] - 19s 215ms/step - loss: 0.0278 - accuracy: 0.9994 - val_loss: 0.4183 - val_accuracy: 0.8627
Epoch 33/1000
88/88 [==============================] - 22s 253ms/step - loss: 0.0251 - accuracy: 0.9994 - val_loss: 0.4170 - val_accuracy: 0.8627
Epoch 34/1000
88/88 [==============================] - 22s 253ms/step - loss: 0.0226 - accuracy: 0.9994 - val_loss: 0.4236 - val_accuracy: 0.8584
Epoch 35/1000
88/88 [==============================] - 22s 251ms/step - loss: 0.0205 - accuracy: 0.9994 - val_loss: 0.4269 - val_accuracy: 0.8584
Epoch 36/1000
88/88 [==============================] - 23s 257ms/step - loss: 0.0187 - accuracy: 0.9994 - val_loss: 0.4276 - val_accuracy: 0.8584
Epoch 37/1000
88/88 [==============================] - 23s 264ms/step - loss: 0.0169 - accuracy: 0.9994 - val_loss: 0.4359 - val_accuracy: 0.8584
Epoch 38/1000
88/88 [==============================] - 23s 259ms/step - loss: 0.0153 - accuracy: 0.9994 - val_loss: 0.4345 - val_accuracy: 0.8584
Epoch 39/1000
88/88 [==============================] - 23s 262ms/step - loss: 0.0142 - accuracy: 0.9994 - val_loss: 0.4405 - val_accuracy: 0.8584
Epoch 40/1000
88/88 [==============================] - 21s 244ms/step - loss: 0.0129 - accuracy: 0.9994 - val_loss: 0.4409 - val_accuracy: 0.8627
30/30 [==============================] - 5s 154ms/step - loss: 0.4972 - accuracy: 0.8755

Process finished with exit code 0

5. Deep Transfer Learning

5.1. Transfer Learning: Introduction + Hands-On Practice

  • From the test results above we can see that the small network actually works quite well, which confirms that ResNet's representational capacity is simply too strong for a dataset this small, so training it directly gives poor results. For a small dataset where we still want good performance (and still want to use a deep architecture), training from scratch sometimes just doesn't get off the ground; the quick remedy is transfer learning.
  • Hands-on code
import os
import tensorflow as tf
import numpy as np

from tensorflow import keras
from tensorflow.keras import layers, optimizers, losses
# from tensorflow.keras.callbacks import EarlyStopping

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')

# import the concrete utilities defined earlier
from pokemon import  load_pokemon, normalize, denormalize
from resnet import ResNet                               # import the model

# The preprocessing function, copied over (resize enlarged to 256 here).
def preprocess(x,y):
    # x: path of the image; y: integer label of the image
    x = tf.io.read_file(x)
    x = tf.image.decode_jpeg(x, channels=3)             # some images are RGBA; force 3 channels
    x = tf.image.resize(x, [256, 256])

    x = tf.image.random_flip_left_right(x)
    x = tf.image.random_flip_up_down(x)
    x = tf.image.random_crop(x, [224,224,3])

    # x: [0,255] => 0~1, then standardized with the ImageNet statistics
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = normalize(x)
    y = tf.convert_to_tensor(y)
    y = tf.one_hot(y, depth=5)

    return x, y

batchsz = 8

# create the train db; only the training split needs to be shuffled
images, labels, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='train')
db_train = tf.data.Dataset.from_tensor_slices((images, labels))     # turn the lists into a Dataset object
db_train = db_train.shuffle(1000).map(preprocess).batch(batchsz)    # map converts image paths into image contents
# create the validation db
images2, labels2, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='val')
db_val = tf.data.Dataset.from_tensor_slices((images2, labels2))
db_val = db_val.map(preprocess).batch(batchsz)
# create the test db
images3, labels3, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='test')
db_test = tf.data.Dataset.from_tensor_slices((images3, labels3))
db_test = db_test.map(preprocess).batch(batchsz)

# Import a network that has already been trained elsewhere, together with its weights:
# keras.applications provides classic architectures with weights pretrained on ImageNet.
# Here we use VGG19 with its ImageNet weights; include_top=False drops the original
# 1000-class output head, and pooling='max' adds a global max pooling at the end.
net = keras.applications.VGG19(weights='imagenet', include_top=False,
                               pooling='max')

net.trainable = False                                   # freeze the pretrained backbone: it takes no part in backprop updates

newnet = keras.Sequential([net, layers.Dense(5)])

newnet.build(input_shape=(batchsz, 224, 224, 3))
newnet.summary()

# early_stopping: stop training when val_accuracy has improved by less than
# min_delta for `patience` (=10) consecutive epochs.
early_stopping = keras.callbacks.EarlyStopping(monitor='val_accuracy',
                                               min_delta=0.00001,
                                               patience=10, verbose=1)

# reduce_lr: when val_accuracy stops improving for `patience` (=10) epochs,
# multiply the learning rate by `factor` (=0.1), but never go below min_lr.
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.1,
                                              patience=10, min_lr=0.000001, verbose=1)

# Compile (assemble) the network.
newnet.compile(optimizer=optimizers.Adam(lr=1e-4), loss=losses.CategoricalCrossentropy(from_logits=True),
               metrics=['accuracy'])

# The standard train/val/test procedure: model selection goes through db_val via the callbacks.
newnet.fit(db_train, validation_data=db_val, validation_freq=1, epochs=500,
           callbacks=[early_stopping, reduce_lr])   # validate once per epoch; stop early when the callback fires
newnet.evaluate(db_test)

  • Run results
ssh://zhangkf@192.168.136.55:22/home/zhangkf/anaconda3/envs/tf2c/bin/python -u /home/zhangkf/tf/TF2/TF2_8_data/train_transfer.py
WARNING:tensorflow:From /home/zhangkf/anaconda3/envs/tf2c/lib/python3.7/site-packages/tensorflow_core/python/data/util/random_seed.py:58: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg19 (Model)                (None, 512)               20024384  
_________________________________________________________________
dense (Dense)                (None, 5)                 2565      
=================================================================
Total params: 20,026,949
Trainable params: 2,565
Non-trainable params: 20,024,384
_________________________________________________________________
Epoch 1/500
102/102 [==============================] - 23s 229ms/step - loss: 1.9337 - accuracy: 0.2712 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/500
102/102 [==============================] - 20s 200ms/step - loss: 1.5340 - accuracy: 0.3313 - val_loss: 1.5425 - val_accuracy: 0.3429
Epoch 3/500
102/102 [==============================] - 19s 190ms/step - loss: 1.4445 - accuracy: 0.4000 - val_loss: 1.4317 - val_accuracy: 0.3886
Epoch 4/500
102/102 [==============================] - 21s 206ms/step - loss: 1.3586 - accuracy: 0.4356 - val_loss: 1.3326 - val_accuracy: 0.4686
Epoch 5/500
102/102 [==============================] - 21s 207ms/step - loss: 1.2486 - accuracy: 0.5387 - val_loss: 1.2367 - val_accuracy: 0.5314
Epoch 6/500
102/102 [==============================] - 21s 207ms/step - loss: 1.1888 - accuracy: 0.5472 - val_loss: 1.1698 - val_accuracy: 0.5657
Epoch 7/500
102/102 [==============================] - 19s 190ms/step - loss: 1.1045 - accuracy: 0.6160 - val_loss: 1.0988 - val_accuracy: 0.5771
Epoch 8/500
102/102 [==============================] - 20s 195ms/step - loss: 1.0337 - accuracy: 0.6528 - val_loss: 1.0467 - val_accuracy: 0.6343
Epoch 9/500
102/102 [==============================] - 21s 208ms/step - loss: 0.9766 - accuracy: 0.6675 - val_loss: 0.9844 - val_accuracy: 0.6571
Epoch 10/500
102/102 [==============================] - 21s 207ms/step - loss: 0.9453 - accuracy: 0.6957 - val_loss: 0.9329 - val_accuracy: 0.6686
Epoch 11/500
102/102 [==============================] - 22s 211ms/step - loss: 0.8816 - accuracy: 0.7387 - val_loss: 0.8985 - val_accuracy: 0.6629
Epoch 12/500
102/102 [==============================] - 21s 203ms/step - loss: 0.8345 - accuracy: 0.7583 - val_loss: 0.8516 - val_accuracy: 0.7314
Epoch 13/500
102/102 [==============================] - 20s 198ms/step - loss: 0.7950 - accuracy: 0.7779 - val_loss: 0.8211 - val_accuracy: 0.7714
Epoch 14/500
102/102 [==============================] - 19s 187ms/step - loss: 0.7744 - accuracy: 0.7840 - val_loss: 0.7846 - val_accuracy: 0.7771
Epoch 15/500
102/102 [==============================] - 19s 188ms/step - loss: 0.7308 - accuracy: 0.7877 - val_loss: 0.7560 - val_accuracy: 0.7886
Epoch 16/500
102/102 [==============================] - 21s 210ms/step - loss: 0.7201 - accuracy: 0.8049 - val_loss: 0.7321 - val_accuracy: 0.7943
Epoch 17/500
102/102 [==============================] - 19s 189ms/step - loss: 0.6888 - accuracy: 0.8184 - val_loss: 0.7135 - val_accuracy: 0.8000
Epoch 18/500
102/102 [==============================] - 21s 204ms/step - loss: 0.6566 - accuracy: 0.8233 - val_loss: 0.6860 - val_accuracy: 0.8286
Epoch 19/500
102/102 [==============================] - 21s 208ms/step - loss: 0.6666 - accuracy: 0.8172 - val_loss: 0.6698 - val_accuracy: 0.8286
Epoch 20/500
102/102 [==============================] - 21s 203ms/step - loss: 0.6201 - accuracy: 0.8356 - val_loss: 0.6488 - val_accuracy: 0.8343
Epoch 21/500
102/102 [==============================] - 21s 206ms/step - loss: 0.5972 - accuracy: 0.8564 - val_loss: 0.6338 - val_accuracy: 0.8229
Epoch 22/500
102/102 [==============================] - 21s 210ms/step - loss: 0.5730 - accuracy: 0.8589 - val_loss: 0.6138 - val_accuracy: 0.8343
Epoch 23/500
102/102 [==============================] - 20s 192ms/step - loss: 0.5612 - accuracy: 0.8589 - val_loss: 0.6027 - val_accuracy: 0.8571
Epoch 24/500
102/102 [==============================] - 19s 186ms/step - loss: 0.5578 - accuracy: 0.8528 - val_loss: 0.5859 - val_accuracy: 0.8514
Epoch 25/500
102/102 [==============================] - 21s 203ms/step - loss: 0.5541 - accuracy: 0.8724 - val_loss: 0.5740 - val_accuracy: 0.8457
Epoch 26/500
102/102 [==============================] - 18s 175ms/step - loss: 0.5192 - accuracy: 0.8712 - val_loss: 0.5615 - val_accuracy: 0.8400
Epoch 27/500
102/102 [==============================] - 21s 209ms/step - loss: 0.5069 - accuracy: 0.8748 - val_loss: 0.5524 - val_accuracy: 0.8514
Epoch 28/500
102/102 [==============================] - 19s 186ms/step - loss: 0.4829 - accuracy: 0.8834 - val_loss: 0.5423 - val_accuracy: 0.8571
Epoch 29/500
102/102 [==============================] - 20s 194ms/step - loss: 0.4975 - accuracy: 0.8773 - val_loss: 0.5335 - val_accuracy: 0.8571
Epoch 30/500
102/102 [==============================] - 19s 188ms/step - loss: 0.4687 - accuracy: 0.8847 - val_loss: 0.5202 - val_accuracy: 0.8514
Epoch 31/500
102/102 [==============================] - 21s 205ms/step - loss: 0.4637 - accuracy: 0.8834 - val_loss: 0.5124 - val_accuracy: 0.8571
Epoch 32/500
102/102 [==============================] - 21s 203ms/step - loss: 0.4791 - accuracy: 0.8687 - val_loss: 0.5027 - val_accuracy: 0.8571
Epoch 33/500
102/102 [==============================] - 21s 208ms/step - loss: 0.4606 - accuracy: 0.8724 - val_loss: 0.4952 - val_accuracy: 0.8629
Epoch 34/500
102/102 [==============================] - 21s 207ms/step - loss: 0.4491 - accuracy: 0.8798 - val_loss: 0.4883 - val_accuracy: 0.8514
Epoch 35/500
102/102 [==============================] - 19s 187ms/step - loss: 0.4408 - accuracy: 0.8871 - val_loss: 0.4812 - val_accuracy: 0.8629
Epoch 36/500
102/102 [==============================] - 21s 209ms/step - loss: 0.4296 - accuracy: 0.8982 - val_loss: 0.4754 - val_accuracy: 0.8571
Epoch 37/500
102/102 [==============================] - 22s 214ms/step - loss: 0.4021 - accuracy: 0.9117 - val_loss: 0.4693 - val_accuracy: 0.8629
Epoch 38/500
102/102 [==============================] - 21s 203ms/step - loss: 0.4055 - accuracy: 0.9080 - val_loss: 0.4641 - val_accuracy: 0.8571
Epoch 39/500
102/102 [==============================] - 21s 205ms/step - loss: 0.3998 - accuracy: 0.9117 - val_loss: 0.4572 - val_accuracy: 0.8686
Epoch 40/500
102/102 [==============================] - 21s 210ms/step - loss: 0.4020 - accuracy: 0.8982 - val_loss: 0.4535 - val_accuracy: 0.8686
Epoch 41/500
102/102 [==============================] - 20s 199ms/step - loss: 0.3919 - accuracy: 0.9166 - val_loss: 0.4447 - val_accuracy: 0.8743
Epoch 42/500
102/102 [==============================] - 22s 213ms/step - loss: 0.3676 - accuracy: 0.9141 - val_loss: 0.4423 - val_accuracy: 0.8800
Epoch 43/500
102/102 [==============================] - 20s 201ms/step - loss: 0.3720 - accuracy: 0.9092 - val_loss: 0.4341 - val_accuracy: 0.8743
Epoch 44/500
102/102 [==============================] - 21s 204ms/step - loss: 0.3682 - accuracy: 0.9104 - val_loss: 0.4324 - val_accuracy: 0.8857
Epoch 45/500
102/102 [==============================] - 21s 206ms/step - loss: 0.3680 - accuracy: 0.9166 - val_loss: 0.4234 - val_accuracy: 0.8857
Epoch 46/500
102/102 [==============================] - 19s 191ms/step - loss: 0.3553 - accuracy: 0.9141 - val_loss: 0.4211 - val_accuracy: 0.8914
Epoch 47/500
102/102 [==============================] - 18s 172ms/step - loss: 0.3507 - accuracy: 0.9190 - val_loss: 0.4184 - val_accuracy: 0.8914
Epoch 48/500
102/102 [==============================] - 21s 210ms/step - loss: 0.3640 - accuracy: 0.9141 - val_loss: 0.4158 - val_accuracy: 0.8971
Epoch 49/500
102/102 [==============================] - 21s 210ms/step - loss: 0.3378 - accuracy: 0.9239 - val_loss: 0.4075 - val_accuracy: 0.8971
Epoch 50/500
102/102 [==============================] - 21s 209ms/step - loss: 0.3480 - accuracy: 0.9129 - val_loss: 0.4031 - val_accuracy: 0.8914
Epoch 51/500
102/102 [==============================] - 21s 209ms/step - loss: 0.3298 - accuracy: 0.9325 - val_loss: 0.3978 - val_accuracy: 0.8971
Epoch 52/500
102/102 [==============================] - 18s 175ms/step - loss: 0.3354 - accuracy: 0.9227 - val_loss: 0.3940 - val_accuracy: 0.9029
Epoch 53/500
102/102 [==============================] - 21s 210ms/step - loss: 0.3168 - accuracy: 0.9239 - val_loss: 0.3900 - val_accuracy: 0.8971
Epoch 54/500
102/102 [==============================] - 21s 202ms/step - loss: 0.3190 - accuracy: 0.9264 - val_loss: 0.3909 - val_accuracy: 0.9086
Epoch 55/500
102/102 [==============================] - 21s 206ms/step - loss: 0.3206 - accuracy: 0.9264 - val_loss: 0.3866 - val_accuracy: 0.9086
Epoch 56/500
102/102 [==============================] - 19s 184ms/step - loss: 0.3071 - accuracy: 0.9227 - val_loss: 0.3831 - val_accuracy: 0.9029
Epoch 57/500
102/102 [==============================] - 22s 211ms/step - loss: 0.2999 - accuracy: 0.9362 - val_loss: 0.3784 - val_accuracy: 0.9029
Epoch 58/500
102/102 [==============================] - 22s 212ms/step - loss: 0.2993 - accuracy: 0.9276 - val_loss: 0.3777 - val_accuracy: 0.9029
Epoch 59/500
102/102 [==============================] - 21s 208ms/step - loss: 0.3060 - accuracy: 0.9239 - val_loss: 0.3744 - val_accuracy: 0.9143
Epoch 60/500
102/102 [==============================] - 21s 206ms/step - loss: 0.2913 - accuracy: 0.9362 - val_loss: 0.3762 - val_accuracy: 0.9086
Epoch 61/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2801 - accuracy: 0.9325 - val_loss: 0.3692 - val_accuracy: 0.9143
Epoch 62/500
102/102 [==============================] - 21s 204ms/step - loss: 0.3024 - accuracy: 0.9288 - val_loss: 0.3635 - val_accuracy: 0.9029
Epoch 63/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2828 - accuracy: 0.9350 - val_loss: 0.3649 - val_accuracy: 0.9143
Epoch 64/500
102/102 [==============================] - 21s 203ms/step - loss: 0.2768 - accuracy: 0.9448 - val_loss: 0.3578 - val_accuracy: 0.9086
Epoch 65/500
102/102 [==============================] - 19s 187ms/step - loss: 0.2821 - accuracy: 0.9362 - val_loss: 0.3578 - val_accuracy: 0.9143
Epoch 66/500
102/102 [==============================] - 20s 192ms/step - loss: 0.2714 - accuracy: 0.9387 - val_loss: 0.3557 - val_accuracy: 0.9143
Epoch 67/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2644 - accuracy: 0.9448 - val_loss: 0.3569 - val_accuracy: 0.9257
Epoch 68/500
102/102 [==============================] - 22s 211ms/step - loss: 0.2700 - accuracy: 0.9350 - val_loss: 0.3539 - val_accuracy: 0.9143
Epoch 69/500
102/102 [==============================] - 21s 202ms/step - loss: 0.2668 - accuracy: 0.9448 - val_loss: 0.3459 - val_accuracy: 0.9143
Epoch 70/500
102/102 [==============================] - 21s 202ms/step - loss: 0.2727 - accuracy: 0.9288 - val_loss: 0.3489 - val_accuracy: 0.9143
Epoch 71/500
102/102 [==============================] - 20s 194ms/step - loss: 0.2658 - accuracy: 0.9227 - val_loss: 0.3445 - val_accuracy: 0.9029
Epoch 72/500
102/102 [==============================] - 21s 201ms/step - loss: 0.2586 - accuracy: 0.9399 - val_loss: 0.3421 - val_accuracy: 0.9143
Epoch 73/500
102/102 [==============================] - 21s 207ms/step - loss: 0.2546 - accuracy: 0.9399 - val_loss: 0.3439 - val_accuracy: 0.9086
Epoch 74/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2602 - accuracy: 0.9399 - val_loss: 0.3392 - val_accuracy: 0.9143
Epoch 75/500
102/102 [==============================] - 17s 171ms/step - loss: 0.2507 - accuracy: 0.9423 - val_loss: 0.3401 - val_accuracy: 0.9143
Epoch 76/500
102/102 [==============================] - 18s 177ms/step - loss: 0.2480 - accuracy: 0.9411 - val_loss: 0.3362 - val_accuracy: 0.9257
Epoch 77/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2381 - accuracy: 0.9436 - val_loss: 0.3354 - val_accuracy: 0.9143
Epoch 78/500
102/102 [==============================] - 21s 209ms/step - loss: 0.2550 - accuracy: 0.9362 - val_loss: 0.3333 - val_accuracy: 0.9143
Epoch 79/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2428 - accuracy: 0.9423 - val_loss: 0.3319 - val_accuracy: 0.9143
Epoch 80/500
102/102 [==============================] - 20s 200ms/step - loss: 0.2451 - accuracy: 0.9374 - val_loss: 0.3309 - val_accuracy: 0.9143
Epoch 81/500
102/102 [==============================] - 21s 209ms/step - loss: 0.2368 - accuracy: 0.9534 - val_loss: 0.3325 - val_accuracy: 0.9086
Epoch 82/500
102/102 [==============================] - 20s 193ms/step - loss: 0.2211 - accuracy: 0.9497 - val_loss: 0.3303 - val_accuracy: 0.9143
Epoch 83/500
102/102 [==============================] - 19s 182ms/step - loss: 0.2301 - accuracy: 0.9436 - val_loss: 0.3286 - val_accuracy: 0.9143
Epoch 84/500
102/102 [==============================] - 21s 208ms/step - loss: 0.2339 - accuracy: 0.9534 - val_loss: 0.3254 - val_accuracy: 0.9086
Epoch 85/500
102/102 [==============================] - 20s 197ms/step - loss: 0.2253 - accuracy: 0.9436 - val_loss: 0.3277 - val_accuracy: 0.9143
Epoch 86/500
102/102 [==============================] - 21s 203ms/step - loss: 0.2361 - accuracy: 0.9411 - val_loss: 0.3255 - val_accuracy: 0.9086
Epoch 87/500
102/102 [==============================] - 21s 210ms/step - loss: 0.2299 - accuracy: 0.9399 - val_loss: 0.3215 - val_accuracy: 0.9086
Epoch 88/500
102/102 [==============================] - 21s 206ms/step - loss: 0.2198 - accuracy: 0.9607 - val_loss: 0.3256 - val_accuracy: 0.9143
Epoch 89/500
102/102 [==============================] - 21s 203ms/step - loss: 0.2258 - accuracy: 0.9509 - val_loss: 0.3195 - val_accuracy: 0.9086
Epoch 90/500
102/102 [==============================] - 17s 167ms/step - loss: 0.2184 - accuracy: 0.9521 - val_loss: 0.3158 - val_accuracy: 0.9143
Epoch 91/500
102/102 [==============================] - 21s 207ms/step - loss: 0.2180 - accuracy: 0.9485 - val_loss: 0.3214 - val_accuracy: 0.9143
Epoch 92/500
102/102 [==============================] - 21s 206ms/step - loss: 0.2150 - accuracy: 0.9485 - val_loss: 0.3144 - val_accuracy: 0.9143
Epoch 93/500
102/102 [==============================] - 19s 182ms/step - loss: 0.2085 - accuracy: 0.9534 - val_loss: 0.3179 - val_accuracy: 0.9143
Epoch 94/500
102/102 [==============================] - 19s 183ms/step - loss: 0.2297 - accuracy: 0.9497 - val_loss: 0.3131 - val_accuracy: 0.9143
Epoch 95/500
102/102 [==============================] - 19s 184ms/step - loss: 0.2074 - accuracy: 0.9583 - val_loss: 0.3131 - val_accuracy: 0.9143
Epoch 96/500
102/102 [==============================] - 17s 169ms/step - loss: 0.2145 - accuracy: 0.9411 - val_loss: 0.3157 - val_accuracy: 0.9086
Epoch 97/500
101/102 [============================>.] - ETA: 0s - loss: 0.2023 - accuracy: 0.9604
Epoch 00097: ReduceLROnPlateau reducing learning rate to 4.999999873689376e-06.
102/102 [==============================] - 21s 209ms/step - loss: 0.2023 - accuracy: 0.9607 - val_loss: 0.3101 - val_accuracy: 0.9143
Epoch 00097: early stopping
22/22 [==============================] - 3s 158ms/step - loss: 0.3629 - accuracy: 0.8971

Process finished with exit code 0

6. My Further Improvements

6.1. Tricks for Further Improvement

Compared with Section 5, the changes here are: stronger data augmentation (random brightness and contrast), a larger batch size (16), and fine-tuning the last 4 layers of VGG19 instead of freezing the whole backbone.

import os
import tensorflow as tf
import numpy as np

from tensorflow import keras
from tensorflow.keras import layers, optimizers, losses
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau

tf.random.set_seed(22)
np.random.seed(22)
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
assert tf.__version__.startswith('2.')      # make sure the TF version string starts with '2.'

# import the concrete utilities defined earlier
from pokemon import  load_pokemon, normalize, denormalize
from resnet import ResNet                               # import the model

# The preprocessing function, copied over, now with extra augmentation.
def preprocess(x,y):
    # x: path of the image; y: integer label of the image
    x = tf.io.read_file(x)
    x = tf.image.decode_jpeg(x, channels=3)             # some images are RGBA; force 3 channels
    x = tf.image.resize(x, [256, 256])

    x = tf.image.random_flip_left_right(x)
    # x = tf.image.random_flip_up_down(x)
    x = tf.image.random_brightness(x, max_delta=0.5)    # randomly adjust brightness (note: x is still float on the 0~255 scale here, so delta=0.5 is very mild)
    x = tf.image.random_contrast(x, 0.1, 0.6)           # randomly adjust contrast within the given factor range
    x = tf.image.random_crop(x, [224,224,3])

    # x: [0,255] => 0~1, then standardized with the ImageNet statistics
    x = tf.cast(x, dtype=tf.float32) / 255.
    x = normalize(x)
    y = tf.convert_to_tensor(y)
    y = tf.one_hot(y, depth=5)

    return x, y

###########################################################################################################
batchsz = 16

# create the train db; only the training split needs to be shuffled
images, labels, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='train')
db_train = tf.data.Dataset.from_tensor_slices((images, labels))     # turn the lists into a Dataset object
db_train = db_train.shuffle(1000).map(preprocess).batch(batchsz)    # map converts image paths into image contents
# create the validation db
images2, labels2, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='val')
db_val = tf.data.Dataset.from_tensor_slices((images2, labels2))
db_val = db_val.map(preprocess).batch(batchsz)
# create the test db
images3, labels3, table = load_pokemon('/home/zhangkf/tf/TF2/TF2_8_data/pokeman',mode='test')
db_test = tf.data.Dataset.from_tensor_slices((images3, labels3))
db_test = db_test.map(preprocess).batch(batchsz)

###########################################################################################################
# Import a pretrained network and its weights. keras.applications provides classic
# architectures pretrained on ImageNet; as before, VGG19 with its 1000-class head removed.
net = keras.applications.VGG19(weights='imagenet',
                               include_top=False,
                               pooling='max')

# net.trainable = False                             # previously the whole backbone was frozen; for better adaptation, the last few layers are left trainable here
for i in range(len(net.layers)-4):                  # print(len(net.layers)) = 23; freeze everything except the last 4 layers
    net.layers[i].trainable = False

model = keras.Sequential([net, layers.Dense(5)])

model.build(input_shape=(None, 224, 224, 3))
model.summary()

# early_stopping: stop training when val_accuracy has improved by less than
# min_delta for `patience` (=30) consecutive epochs.
early_stopping = EarlyStopping(monitor='val_accuracy',
                               min_delta=0.00001,
                               patience=30, verbose=1)

# reduce_lr: when val_accuracy stops improving for `patience` (=30) epochs,
# multiply the learning rate by `factor` (=0.02), but never go below min_lr.
reduce_lr = ReduceLROnPlateau(monitor='val_accuracy', factor=0.02,
                              patience=30, min_lr=0.0000001, verbose=1)

###########################################################################################################
model.compile(optimizer=optimizers.Adam(lr=1e-4),
              loss=losses.CategoricalCrossentropy(from_logits=True), metrics=['accuracy'])  # loss function

model.fit(db_train, validation_data=db_val, validation_freq=1, epochs=1000,
          initial_epoch=0, callbacks=[early_stopping, reduce_lr])                           # validate once per epoch

model.evaluate(db_test)

  • Run results:
ssh://zhangkf@192.168.136.55:22/home/zhangkf/anaconda3/envs/tf2c/bin/python -u /home/zhangkf/tf/TF2/TF2_8_data/train_transfer.py
WARNING:tensorflow:From /home/zhangkf/anaconda3/envs/tf2c/lib/python3.7/site-packages/tensorflow_core/python/data/util/random_seed.py:58: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
vgg19 (Model)                (None, 512)               20024384  
_________________________________________________________________
dense (Dense)                (None, 5)                 2565      
=================================================================
Total params: 20,026,949
Trainable params: 4,722,181
Non-trainable params: 15,304,768
_________________________________________________________________
Epoch 1/1000
44/44 [==============================] - 23s 515ms/step - loss: 0.8220 - accuracy: 0.7182 - val_loss: 0.0000e+00 - val_accuracy: 0.0000e+00
Epoch 2/1000
44/44 [==============================] - 19s 424ms/step - loss: 0.2022 - accuracy: 0.9385 - val_loss: 0.3446 - val_accuracy: 0.8798
Epoch 3/1000
44/44 [==============================] - 18s 401ms/step - loss: 0.1196 - accuracy: 0.9642 - val_loss: 0.2623 - val_accuracy: 0.9313
Epoch 4/1000
44/44 [==============================] - 19s 436ms/step - loss: 0.0686 - accuracy: 0.9871 - val_loss: 0.2385 - val_accuracy: 0.9185
Epoch 5/1000
44/44 [==============================] - 20s 443ms/step - loss: 0.0537 - accuracy: 0.9857 - val_loss: 0.2634 - val_accuracy: 0.9227
Epoch 6/1000
44/44 [==============================] - 19s 427ms/step - loss: 0.0738 - accuracy: 0.9886 - val_loss: 0.2965 - val_accuracy: 0.9185
Epoch 7/1000
44/44 [==============================] - 20s 461ms/step - loss: 0.0663 - accuracy: 0.9900 - val_loss: 0.3312 - val_accuracy: 0.9099
Epoch 8/1000
44/44 [==============================] - 19s 428ms/step - loss: 0.0757 - accuracy: 0.9871 - val_loss: 0.2587 - val_accuracy: 0.9270
Epoch 9/1000
44/44 [==============================] - 21s 467ms/step - loss: 0.0618 - accuracy: 0.9886 - val_loss: 0.2082 - val_accuracy: 0.9442
Epoch 10/1000
44/44 [==============================] - 20s 456ms/step - loss: 0.0597 - accuracy: 0.9843 - val_loss: 0.3669 - val_accuracy: 0.9185
Epoch 11/1000
44/44 [==============================] - 20s 444ms/step - loss: 0.0968 - accuracy: 0.9800 - val_loss: 0.2210 - val_accuracy: 0.9399
Epoch 12/1000
44/44 [==============================] - 19s 440ms/step - loss: 0.0702 - accuracy: 0.9871 - val_loss: 0.2665 - val_accuracy: 0.9356
Epoch 13/1000
44/44 [==============================] - 18s 420ms/step - loss: 0.0507 - accuracy: 0.9886 - val_loss: 0.2004 - val_accuracy: 0.9399
Epoch 14/1000
44/44 [==============================] - 19s 421ms/step - loss: 0.0483 - accuracy: 0.9900 - val_loss: 0.2526 - val_accuracy: 0.9270
Epoch 15/1000
44/44 [==============================] - 19s 442ms/step - loss: 0.0334 - accuracy: 0.9886 - val_loss: 0.2460 - val_accuracy: 0.9227
Epoch 16/1000
44/44 [==============================] - 20s 457ms/step - loss: 0.0555 - accuracy: 0.9900 - val_loss: 0.6035 - val_accuracy: 0.8670
Epoch 17/1000
44/44 [==============================] - 19s 438ms/step - loss: 0.0333 - accuracy: 0.9886 - val_loss: 0.2176 - val_accuracy: 0.9442
Epoch 18/1000
44/44 [==============================] - 18s 416ms/step - loss: 0.0679 - accuracy: 0.9914 - val_loss: 0.2387 - val_accuracy: 0.9399
Epoch 19/1000
44/44 [==============================] - 20s 454ms/step - loss: 0.0813 - accuracy: 0.9857 - val_loss: 0.3338 - val_accuracy: 0.9227
Epoch 20/1000
44/44 [==============================] - 18s 402ms/step - loss: 0.0326 - accuracy: 0.9914 - val_loss: 0.3881 - val_accuracy: 0.8970
Epoch 21/1000
44/44 [==============================] - 19s 437ms/step - loss: 0.0356 - accuracy: 0.9914 - val_loss: 0.3823 - val_accuracy: 0.9227
Epoch 22/1000
44/44 [==============================] - 18s 413ms/step - loss: 0.0394 - accuracy: 0.9914 - val_loss: 0.2497 - val_accuracy: 0.9571
Epoch 23/1000
44/44 [==============================] - 20s 458ms/step - loss: 0.0553 - accuracy: 0.9871 - val_loss: 0.2874 - val_accuracy: 0.9313
Epoch 24/1000
44/44 [==============================] - 18s 411ms/step - loss: 0.0331 - accuracy: 0.9914 - val_loss: 0.2256 - val_accuracy: 0.9442
Epoch 25/1000
44/44 [==============================] - 20s 456ms/step - loss: 0.0344 - accuracy: 0.9928 - val_loss: 0.2680 - val_accuracy: 0.9313
Epoch 26/1000
44/44 [==============================] - 20s 452ms/step - loss: 0.0261 - accuracy: 0.9928 - val_loss: 0.2897 - val_accuracy: 0.9313
Epoch 27/1000
44/44 [==============================] - 20s 459ms/step - loss: 0.0496 - accuracy: 0.9886 - val_loss: 0.3291 - val_accuracy: 0.9227
Epoch 28/1000
44/44 [==============================] - 20s 457ms/step - loss: 0.0770 - accuracy: 0.9871 - val_loss: 0.3221 - val_accuracy: 0.9056
Epoch 29/1000
44/44 [==============================] - 20s 449ms/step - loss: 0.0324 - accuracy: 0.9943 - val_loss: 0.1766 - val_accuracy: 0.9614
Epoch 30/1000
44/44 [==============================] - 18s 408ms/step - loss: 0.0417 - accuracy: 0.9900 - val_loss: 0.2819 - val_accuracy: 0.9227
Epoch 31/1000
44/44 [==============================] - 19s 428ms/step - loss: 0.0350 - accuracy: 0.9900 - val_loss: 0.1817 - val_accuracy: 0.9528
Epoch 32/1000
44/44 [==============================] - 19s 438ms/step - loss: 0.0346 - accuracy: 0.9914 - val_loss: 0.2838 - val_accuracy: 0.9270
Epoch 33/1000
44/44 [==============================] - 19s 430ms/step - loss: 0.0441 - accuracy: 0.9900 - val_loss: 0.2502 - val_accuracy: 0.9313
Epoch 34/1000
44/44 [==============================] - 18s 418ms/step - loss: 0.0187 - accuracy: 0.9928 - val_loss: 0.2004 - val_accuracy: 0.9571
Epoch 35/1000
44/44 [==============================] - 17s 397ms/step - loss: 0.0319 - accuracy: 0.9943 - val_loss: 0.4355 - val_accuracy: 0.9099
Epoch 36/1000
44/44 [==============================] - 20s 447ms/step - loss: 0.0373 - accuracy: 0.9886 - val_loss: 0.1846 - val_accuracy: 0.9571
Epoch 37/1000
44/44 [==============================] - 19s 426ms/step - loss: 0.0275 - accuracy: 0.9943 - val_loss: 0.2332 - val_accuracy: 0.9442
Epoch 38/1000
44/44 [==============================] - 20s 454ms/step - loss: 0.0203 - accuracy: 0.9914 - val_loss: 0.2743 - val_accuracy: 0.9356
Epoch 39/1000
44/44 [==============================] - 19s 430ms/step - loss: 0.0399 - accuracy: 0.9900 - val_loss: 0.2395 - val_accuracy: 0.9356
Epoch 40/1000
44/44 [==============================] - 20s 457ms/step - loss: 0.0305 - accuracy: 0.9914 - val_loss: 0.2900 - val_accuracy: 0.9185
Epoch 41/1000
44/44 [==============================] - 18s 409ms/step - loss: 0.0238 - accuracy: 0.9943 - val_loss: 0.1827 - val_accuracy: 0.9571
Epoch 42/1000
44/44 [==============================] - 18s 411ms/step - loss: 0.0279 - accuracy: 0.9886 - val_loss: 0.2681 - val_accuracy: 0.9399
Epoch 43/1000
44/44 [==============================] - 20s 449ms/step - loss: 0.0192 - accuracy: 0.9914 - val_loss: 0.2340 - val_accuracy: 0.9313
Epoch 44/1000
44/44 [==============================] - 20s 453ms/step - loss: 0.0418 - accuracy: 0.9914 - val_loss: 0.2768 - val_accuracy: 0.9227
Epoch 45/1000
44/44 [==============================] - 18s 400ms/step - loss: 0.0278 - accuracy: 0.9914 - val_loss: 0.1977 - val_accuracy: 0.9313
Epoch 46/1000
44/44 [==============================] - 20s 456ms/step - loss: 0.0279 - accuracy: 0.9943 - val_loss: 0.3983 - val_accuracy: 0.9013
Epoch 47/1000
44/44 [==============================] - 20s 464ms/step - loss: 0.0347 - accuracy: 0.9914 - val_loss: 0.3160 - val_accuracy: 0.9142
Epoch 48/1000
44/44 [==============================] - 20s 447ms/step - loss: 0.0437 - accuracy: 0.9871 - val_loss: 0.2124 - val_accuracy: 0.9442
Epoch 49/1000
44/44 [==============================] - 18s 403ms/step - loss: 0.0286 - accuracy: 0.9900 - val_loss: 0.3201 - val_accuracy: 0.9356
Epoch 50/1000
44/44 [==============================] - 20s 448ms/step - loss: 0.0141 - accuracy: 0.9943 - val_loss: 0.2216 - val_accuracy: 0.9528
Epoch 51/1000
44/44 [==============================] - 19s 442ms/step - loss: 0.0323 - accuracy: 0.9886 - val_loss: 0.2520 - val_accuracy: 0.9485
Epoch 52/1000
44/44 [==============================] - 20s 448ms/step - loss: 0.0215 - accuracy: 0.9886 - val_loss: 0.1760 - val_accuracy: 0.9485
Epoch 53/1000
44/44 [==============================] - 19s 440ms/step - loss: 0.0303 - accuracy: 0.9943 - val_loss: 0.3124 - val_accuracy: 0.9270
Epoch 54/1000
44/44 [==============================] - 20s 446ms/step - loss: 0.0300 - accuracy: 0.9886 - val_loss: 0.2771 - val_accuracy: 0.9356
Epoch 55/1000
44/44 [==============================] - 19s 428ms/step - loss: 0.0173 - accuracy: 0.9928 - val_loss: 0.2744 - val_accuracy: 0.9356
Epoch 56/1000
44/44 [==============================] - 20s 446ms/step - loss: 0.0251 - accuracy: 0.9886 - val_loss: 0.2540 - val_accuracy: 0.9442
Epoch 57/1000
44/44 [==============================] - 18s 410ms/step - loss: 0.0201 - accuracy: 0.9886 - val_loss: 0.2950 - val_accuracy: 0.9356
Epoch 58/1000
44/44 [==============================] - 20s 459ms/step - loss: 0.0238 - accuracy: 0.9914 - val_loss: 0.2186 - val_accuracy: 0.9442
Epoch 59/1000
43/44 [============================>.] - ETA: 0s - loss: 0.0167 - accuracy: 0.9898
Epoch 00059: ReduceLROnPlateau reducing learning rate to 1.9999999494757505e-06.
44/44 [==============================] - 20s 452ms/step - loss: 0.0167 - accuracy: 0.9900 - val_loss: 0.1988 - val_accuracy: 0.9528
Epoch 00059: early stopping
15/15 [==============================] - 5s 319ms/step - loss: 0.2250 - accuracy: 0.9528

Process finished with exit code 0

6.2. Final Summary
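
  • Training ResNet-18 from scratch on ~1,000 images overfits badly: training accuracy reaches 100% while validation accuracy hovers around 20-30%.
  • A small 4-layer CNN trained from scratch reaches roughly 87% test accuracy (0.8755).
  • Transfer learning with a frozen ImageNet-pretrained VGG19 plus a new 5-way Dense head reaches roughly 90% test accuracy (0.8971).
  • Fine-tuning the last few VGG19 layers and adding brightness/contrast augmentation pushes test accuracy to roughly 95% (0.9528).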

7. Supplementary Knowledge: tf.where

tf.where(
    condition,
    x=None,
    y=None,
    name=None
)
Return the elements, either from x or y, depending on the condition.

Intuition: as the name says, `where` finds the elements you want according to a condition.

condition: the condition, a boolean tensor.

x: the data.

y: data with the same shape as x.

Returns: the elements selected by the condition. Where the condition is True, the element is taken from x; where it is False, it is taken from y.
  • Worked example
import tensorflow as tf
import numpy as np

# define a tensor of random values to build the condition from
condition = tf.convert_to_tensor(np.random.random([2, 3]), dtype=tf.float32)
print(condition)

# define two tensors holding the source data
a = tf.ones(shape=[2, 3], name='a')
print(a)
b = tf.zeros(shape=[2, 3], name='b')
print(b)

# where the value exceeds 0.5, take the element from a; otherwise from b
result = tf.where(condition > 0.5, a, b)

print(result)
  • Output
ssh://zhangkf@192.168.136.64:22/home/zhangkf/anaconda3/envs/tf2c/bin/python -u/home/zhangkf/johnCodes/TF1/test.py
tf.Tensor(
[[0.04526703 0.08822254 0.6437674 ]
 [0.3951503  0.39249578 0.51326084]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
tf.Tensor(
[[0. 0. 1.]
 [0. 0. 1.]], shape=(2, 3), dtype=float32)

Process finished with exit code 0
  • A second example: combining the single-argument form of tf.where with tf.gather_nd to select rows
import tensorflow as tf
import numpy as np

# define a constant tensor to build the condition from
condition = tf.constant([1, 2, 2, 4])
print(condition)

# define a tensor holding the source data
a = tf.constant([[1, 2, 2, 4], [3, 4, 5, 6], [7, 8, 9, 10], [2, 3, 3, 4]])
print(a)

# find the indices (rows) where condition == 2, then use them to gather the matching rows of a
result_index = tf.where(condition == 2)
result = tf.gather_nd(a, result_index) # returns the 2nd and 3rd rows of a
print(result)

  • Output
tf.Tensor([1 2 2 4], shape=(4,), dtype=int32)
tf.Tensor(
[[ 1  2  2  4]
 [ 3  4  5  6]
 [ 7  8  9 10]
 [ 2  3  3  4]], shape=(4, 4), dtype=int32)
tf.Tensor(
[[1]
 [2]], shape=(2, 1), dtype=int64)
tf.Tensor(
[[ 3  4  5  6]
 [ 7  8  9 10]], shape=(2, 4), dtype=int32)

Process finished with exit code 0
  • axis = 0/1/-1
  • axis=0: operate along the first dimension
  • axis=1: operate along the second dimension
  • axis=-1: operate along the last dimension
  • Take the np.argmax() function as an example:
>>> a = np.arange(24).reshape(2,3,4)
>>> a
array([[[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11]],

       [[12, 13, 14, 15],
        [16, 17, 18, 19],
        [20, 21, 22, 23]]])
>>> np.argmax(a,axis = 0)  #shape(3,4)
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]])
>>> np.argmax(a,axis = 1)  #shape(2,4)
array([[2, 2, 2, 2],
       [2, 2, 2, 2]])
>>> np.argmax(a,axis = -1) #shape(2,3) 
array([[3, 3, 3],
       [3, 3, 3]])

8. If you need the full set of course videos + PPT + code resources, feel free to message me privately!
