【keras-bert Study Notes】2. Saving and Loading a Pretrained Model, and Adding Layers on Top for Supervised Training (Fine-Tuning)

1. Pretrain the model and save it

import tensorflow as tf
from keras_bert import (get_model, compile_model, get_base_dict, gen_batch_inputs)
from indoor_location.utils import get_sentence_pairs


seqence_len = 26  # number of valid APs (access points)
pretrain_datafile_name = "..\\data\\sampleset_data\\trainset_day20-1-8_points20_average_interval_500ms.csv"
MODEL_DIR = "..\\model\\"
pretrained_model_path = MODEL_DIR + "pretrained_bert1.h5"


def bert_indoorlocation_pretrain():

    # Prepare the training and validation data
    sentence_pairs = get_sentence_pairs(pretrain_datafile_name)
    token_dict = get_base_dict()
    for pairs in sentence_pairs:
        for token in pairs[0] + pairs[1]:
            if token not in token_dict:
                token_dict[token] = len(token_dict)
    token_list = list(token_dict.keys())

    x_train, y_train = gen_batch_inputs(
        sentence_pairs,
        token_dict,
        token_list,
        seq_len=seqence_len,
        mask_rate=0.3,
        swap_sentence_rate=1.0,
    )
    x_test, y_test = gen_batch_inputs(
        sentence_pairs,
        token_dict,
        token_list,
        seq_len=seqence_len,
        mask_rate=0.3,
        swap_sentence_rate=1.0,
    )

    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.per_process_gpu_memory_fraction = 0.9
    config.gpu_options.allow_growth = True

    # Create the session (Keras will pick up this default session)
    with tf.Session(config=config) as session:

        # Build the model
        model = get_model(
            token_num=len(token_dict),
            head_num=2,
            transformer_num=2,
            embed_dim=12,
            feed_forward_dim=100,
            seq_len=seqence_len,
            pos_num=seqence_len,
            dropout_rate=0.05,
            attention_activation='gelu',
        )

        # Compile the model
        print("compiling model .....")
        compile_model(
            model,
            learning_rate=1e-3,
            decay_steps=30000,
            warmup_steps=10000,
            weight_decay=1e-3,
        )
        model.summary()
       
        # Train the model
        print("training network...")
        H = model.fit(x_train, y_train, validation_data=(x_test, y_test),
                      batch_size=32, epochs=10, verbose=2)
        # Save the model; this writes both the architecture and the weight values to the .h5 file
        model.save(pretrained_model_path)


bert_indoorlocation_pretrain()
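Since model.save() wrote the full model to the .h5 file, it can later be restored in a single call. A minimal sketch (not part of the original script): keras-bert builds the network from custom layers (TokenEmbedding, EmbeddingSimilarity, Masked, etc., visible in the summary below), so load_model needs the custom-object mapping that keras_bert.get_custom_objects() provides.

from keras_bert import get_custom_objects
from keras_bert.backend import keras

# Restore the full pretrained model (architecture + weights) from the .h5 file;
# custom_objects registers keras-bert's custom layer classes for deserialization.
restored = keras.models.load_model(
    pretrained_model_path,
    custom_objects=get_custom_objects(),
)
restored.summary()  # should print the same summary as shown below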

Output:

Using TensorFlow backend.
2020-04-08 13:27:41.211842: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:01:00.0
totalMemory: 11.00GiB freeMemory: 9.03GiB
2020-04-08 13:27:41.212025: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
compiling model .....
2020-04-08 13:27:42.030384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-08 13:27:42.030481: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2020-04-08 13:27:42.030536: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2020-04-08 13:27:42.030685: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8712 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
Input-Token (InputLayer)        (None, 26)           0                                            
__________________________________________________________________________________________________
Input-Segment (InputLayer)      (None, 26)           0                                            
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 26, 12), (57 684         Input-Token[0][0]                
__________________________________________________________________________________________________
Embedding-Segment (Embedding)   (None, 26, 12)       24          Input-Segment[0][0]              
__________________________________________________________________________________________________
Embedding-Token-Segment (Add)   (None, 26, 12)       0           Embedding-Token[0][0]            
                                                                 Embedding-Segment[0][0]          
__________________________________________________________________________________________________
Embedding-Position (PositionEmb (None, 26, 12)       312         Embedding-Token-Segment[0][0]    
__________________________________________________________________________________________________
Embedding-Dropout (Dropout)     (None, 26, 12)       0           Embedding-Position[0][0]         
__________________________________________________________________________________________________
Embedding-Norm (LayerNormalizat (None, 26, 12)       24          Embedding-Dropout[0][0]          
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       624         Embedding-Norm[0][0]             
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-1-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       0           Embedding-Norm[0][0]             
                                                                 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       24          Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward (FeedForw (None, 26, 12)       2512        Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward-Dropout ( (None, 26, 12)       0           Encoder-1-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add) (None, 26, 12)       0           Encoder-1-MultiHeadSelfAttention-
                                                                 Encoder-1-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-1-FeedForward-Norm (Lay (None, 26, 12)       24          Encoder-1-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       624         Encoder-1-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-2-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-1-FeedForward-Norm[0][0] 
                                                                 Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       24          Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward (FeedForw (None, 26, 12)       2512        Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward-Dropout ( (None, 26, 12)       0           Encoder-2-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-2-FeedForward-Add (Add) (None, 26, 12)       0           Encoder-2-MultiHeadSelfAttention-
                                                                 Encoder-2-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-2-FeedForward-Norm (Lay (None, 26, 12)       24          Encoder-2-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
MLM-Dense (Dense)               (None, 26, 12)       156         Encoder-2-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
MLM-Norm (LayerNormalization)   (None, 26, 12)       24          MLM-Dense[0][0]                  
__________________________________________________________________________________________________
Extract (Extract)               (None, 12)           0           Encoder-2-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
MLM-Sim (EmbeddingSimilarity)   (None, 26, 57)       57          MLM-Norm[0][0]                   
                                                                 Embedding-Token[0][1]            
__________________________________________________________________________________________________
Input-Masked (InputLayer)       (None, 26)           0                                            
__________________________________________________________________________________________________
NSP-Dense (Dense)               (None, 12)           156         Extract[0][0]                    
__________________________________________________________________________________________________
MLM (Masked)                    (None, 26, 57)       0           MLM-Sim[0][0]                    
                                                                 Input-Masked[0][0]               
__________________________________________________________________________________________________
NSP (Dense)                     (None, 2)            26          NSP-Dense[0][0]                  
==================================================================================================
Total params: 7,831
Trainable params: 7,831
Non-trainable params: 0
__________________________________________________________________________________________________
training network...
Train on 2999 samples, validate on 2999 samples
Epoch 1/10
 - 5s - loss: 4.7575 - MLM_loss: 4.0206 - NSP_loss: 0.7368 - val_loss: 4.7079 - val_MLM_loss: 4.0104 - val_NSP_loss: 0.6975
Epoch 2/10
 - 2s - loss: 4.7176 - MLM_loss: 3.9974 - NSP_loss: 0.7201 - val_loss: 4.6687 - val_MLM_loss: 3.9731 - val_NSP_loss: 0.6956
Epoch 3/10
 - 2s - loss: 4.6630 - MLM_loss: 3.9494 - NSP_loss: 0.7137 - val_loss: 4.6077 - val_MLM_loss: 3.9124 - val_NSP_loss: 0.6953
Epoch 4/10
 - 2s - loss: 4.5877 - MLM_loss: 3.8834 - NSP_loss: 0.7043 - val_loss: 4.5350 - val_MLM_loss: 3.8402 - val_NSP_loss: 0.6948
Epoch 5/10
 - 2s - loss: 4.5107 - MLM_loss: 3.8078 - NSP_loss: 0.7028 - val_loss: 4.4539 - val_MLM_loss: 3.7595 - val_NSP_loss: 0.6944
Epoch 6/10
 - 2s - loss: 4.4228 - MLM_loss: 3.7230 - NSP_loss: 0.6998 - val_loss: 4.3637 - val_MLM_loss: 3.6695 - val_NSP_loss: 0.6942
Epoch 7/10
 - 2s - loss: 4.3249 - MLM_loss: 3.6280 - NSP_loss: 0.6968 - val_loss: 4.2618 - val_MLM_loss: 3.5671 - val_NSP_loss: 0.6947
Epoch 8/10
 - 2s - loss: 4.2166 - MLM_loss: 3.5193 - NSP_loss: 0.6973 - val_loss: 4.1447 - val_MLM_loss: 3.4496 - val_NSP_loss: 0.6952
Epoch 9/10
 - 2s - loss: 4.0865 - MLM_loss: 3.3931 - NSP_loss: 0.6934 - val_loss: 4.0073 - val_MLM_loss: 3.3134 - val_NSP_loss: 0.6939
Epoch 10/10
 - 2s - loss: 3.9428 - MLM_loss: 3.2481 - NSP_loss: 0.6947 - val_loss: 3.8502 - val_MLM_loss: 3.1565 - val_NSP_loss: 0.6937

Process finished with exit code 0

2. Load the pretrained model and add layers on top for supervised training (fine-tuning)

import tensorflow as tf
from keras_bert.backend import keras
from keras_bert.layers import Extract
from keras_bert import (get_model, get_base_dict)
from indoor_location.utils import (get_sentence_pairs, gen_bert_data)

seqence_len = 26  # number of valid APs (access points)
pretrain_datafile_name = "..\\data\\sampleset_data\\trainset_day20-1-8_points20_average_interval_500ms.csv"
train_datafile_name = "..\\data\\sampleset_data\\trainset_day20-1-8_points20_average_interval_500ms.csv"
test_datafile_name = "..\\data\\sampleset_data\\trainset_day20-1-8_points20_average_interval_500ms.csv"

MODEL_DIR = "..\\model\\"
pretrained_model_path = MODEL_DIR + "pretrained_bert1.h5"
trained_model_path = MODEL_DIR + "trained_bert1.h5"

LR = 0.001
EPOCHS = 10
BATCH_SIZE = 128

def bert_indoorlocation_train_with_label():
    config = tf.ConfigProto(allow_soft_placement=True)
    config.gpu_options.per_process_gpu_memory_fraction = 0.9
    config.gpu_options.allow_growth = True
    # Hand the configured session to Keras so the GPU options take effect
    keras.backend.set_session(tf.Session(config=config))

    # Prepare the training and validation data
    sentence_pairs = get_sentence_pairs(pretrain_datafile_name)
    token_dict = get_base_dict()
    for pairs in sentence_pairs:
        for token in pairs[0] + pairs[1]:
            if token not in token_dict:
                token_dict[token] = len(token_dict)
    token_list = list(token_dict.keys())

    x_train, y_train = gen_bert_data(train_datafile_name, seqence_len)
    x_test, y_test = gen_bert_data(test_datafile_name, seqence_len)
    
    ## Build the model
    ## 1. Get input_layer and transformed
    # This part has exactly the same structure as the pretrained model
    input_layer, transformed = get_model(
        token_num=len(token_dict),
        head_num=2,
        transformer_num=2,
        embed_dim=12,
        feed_forward_dim=100,
        seq_len=seqence_len,
        pos_num=seqence_len,
        dropout_rate=0.05,
        attention_activation='gelu',
        training=False,
        trainable=True
    )
    ## 2. Add the downstream layers
    extract_layer = Extract(index=0, name='Extract')(transformed)  # transformed has output shape (None, 26, 12); Extract pulls out the position-0 vector, so extract_layer has output shape (None, 12)
    output_layer = keras.layers.Dense(units=2, activation="relu", name="coor_output")(extract_layer)  # output shape (None, 2): one (x, y) coordinate per sample; None is the batch-size placeholder
    ## 3. Create the model
    model = keras.models.Model(inputs=input_layer, outputs=output_layer)

    ## Load the pretrained weights
    model.load_weights(pretrained_model_path, by_name=True)  # with by_name=True, only weights of layers whose names match layers in the pretrained model are loaded

    ## compile model
    optimizer = keras.optimizers.RMSprop(LR)
    model.compile(
        optimizer=optimizer,
        loss='mse',
        metrics=['mae', 'mse'],
    )
    # Log the model architecture
    model.summary()

    ## Train the model
    model.fit(
        x_train,
        y_train,
        epochs=EPOCHS,
        batch_size=BATCH_SIZE,
    )

    ## Save the model
    model.save(trained_model_path)
   

bert_indoorlocation_train_with_label()
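After training, the saved fine-tuned model can be restored and used to predict coordinates. A minimal sketch, not part of the original post, assuming gen_bert_data returns the same input format it produced for model.fit above (its exact behavior lives in indoor_location.utils, so treat this as illustrative):

from keras_bert import get_custom_objects
from keras_bert.backend import keras
from indoor_location.utils import gen_bert_data

# Restore the fine-tuned model; keras-bert's custom layers (e.g. Extract)
# must be registered via custom_objects for deserialization.
model = keras.models.load_model(
    trained_model_path,
    custom_objects=get_custom_objects(),
)
x_test, y_test = gen_bert_data(test_datafile_name, seqence_len)
pred_coords = model.predict(x_test)  # shape (num_samples, 2): predicted (x, y)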

Output:

Using TensorFlow backend.
2020-04-08 13:04:45.150754: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties: 
name: GeForce RTX 2080 Ti major: 7 minor: 5 memoryClockRate(GHz): 1.545
pciBusID: 0000:01:00.0
totalMemory: 11.00GiB freeMemory: 9.03GiB
2020-04-08 13:04:45.150935: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
2020-04-08 13:04:45.967328: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-04-08 13:04:45.967429: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0 
2020-04-08 13:04:45.967484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N 
2020-04-08 13:04:45.967631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8712 MB memory) -> physical GPU (device: 0, name: GeForce RTX 2080 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5)
Model: "model_2"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
Input-Token (InputLayer)        (None, 26)           0                                            
__________________________________________________________________________________________________
Input-Segment (InputLayer)      (None, 26)           0                                            
__________________________________________________________________________________________________
Embedding-Token (TokenEmbedding [(None, 26, 12), (57 684         Input-Token[0][0]                
__________________________________________________________________________________________________
Embedding-Segment (Embedding)   (None, 26, 12)       24          Input-Segment[0][0]              
__________________________________________________________________________________________________
Embedding-Token-Segment (Add)   (None, 26, 12)       0           Embedding-Token[0][0]            
                                                                 Embedding-Segment[0][0]          
__________________________________________________________________________________________________
Embedding-Position (PositionEmb (None, 26, 12)       312         Embedding-Token-Segment[0][0]    
__________________________________________________________________________________________________
Embedding-Dropout (Dropout)     (None, 26, 12)       0           Embedding-Position[0][0]         
__________________________________________________________________________________________________
Embedding-Norm (LayerNormalizat (None, 26, 12)       24          Embedding-Dropout[0][0]          
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       624         Embedding-Norm[0][0]             
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-1-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       0           Embedding-Norm[0][0]             
                                                                 Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-MultiHeadSelfAttentio (None, 26, 12)       24          Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward (FeedForw (None, 26, 12)       2512        Encoder-1-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-1-FeedForward-Dropout ( (None, 26, 12)       0           Encoder-1-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-1-FeedForward-Add (Add) (None, 26, 12)       0           Encoder-1-MultiHeadSelfAttention-
                                                                 Encoder-1-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-1-FeedForward-Norm (Lay (None, 26, 12)       24          Encoder-1-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       624         Encoder-1-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-2-MultiHeadSelfAttention[
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       0           Encoder-1-FeedForward-Norm[0][0] 
                                                                 Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-MultiHeadSelfAttentio (None, 26, 12)       24          Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward (FeedForw (None, 26, 12)       2512        Encoder-2-MultiHeadSelfAttention-
__________________________________________________________________________________________________
Encoder-2-FeedForward-Dropout ( (None, 26, 12)       0           Encoder-2-FeedForward[0][0]      
__________________________________________________________________________________________________
Encoder-2-FeedForward-Add (Add) (None, 26, 12)       0           Encoder-2-MultiHeadSelfAttention-
                                                                 Encoder-2-FeedForward-Dropout[0][
__________________________________________________________________________________________________
Encoder-2-FeedForward-Norm (Lay (None, 26, 12)       24          Encoder-2-FeedForward-Add[0][0]  
__________________________________________________________________________________________________
Extract (Extract)               (None, 12)           0           Encoder-2-FeedForward-Norm[0][0] 
__________________________________________________________________________________________________
coor_output (Dense)             (None, 2)            26          Extract[0][0]                    
==================================================================================================
Total params: 7,438
Trainable params: 7,438
Non-trainable params: 0
__________________________________________________________________________________________________
Epoch 1/10

 128/6000 [..............................] - ETA: 1:21 - loss: 91.8831 - mean_absolute_error: 7.6603 - mean_squared_error: 91.8831
 768/6000 [==>...........................] - ETA: 12s - loss: 88.0509 - mean_absolute_error: 7.3686 - mean_squared_error: 88.0509 
1408/6000 [======>.......................] - ETA: 6s - loss: 83.8242 - mean_absolute_error: 7.1678 - mean_squared_error: 83.8242 
2048/6000 [=========>....................] - ETA: 3s - loss: 82.8057 - mean_absolute_error: 7.0919 - mean_squared_error: 82.8057
2688/6000 [============>.................] - ETA: 2s - loss: 81.6956 - mean_absolute_error: 7.0135 - mean_squared_error: 81.6956
3328/6000 [===============>..............] - ETA: 1s - loss: 80.9055 - mean_absolute_error: 6.9641 - mean_squared_error: 80.9055
3968/6000 [==================>...........] - ETA: 1s - loss: 80.0507 - mean_absolute_error: 6.9004 - mean_squared_error: 80.0507
4608/6000 [======================>.......] - ETA: 0s - loss: 78.9523 - mean_absolute_error: 6.8508 - mean_squared_error: 78.9523
5248/6000 [=========================>....] - ETA: 0s - loss: 78.0947 - mean_absolute_error: 6.7932 - mean_squared_error: 78.0947
5888/6000 [============================>.] - ETA: 0s - loss: 77.5967 - mean_absolute_error: 6.7624 - mean_squared_error: 77.5967
6000/6000 [==============================] - 2s 385us/step - loss: 77.4605 - mean_absolute_error: 6.7546 - mean_squared_error: 77.4605
Epoch 2/10

 128/6000 [..............................] - ETA: 0s - loss: 58.4012 - mean_absolute_error: 5.8023 - mean_squared_error: 58.4012
 768/6000 [==>...........................] - ETA: 0s - loss: 67.5508 - mean_absolute_error: 6.2530 - mean_squared_error: 67.5508
1408/6000 [======>.......................] - ETA: 0s - loss: 67.9918 - mean_absolute_error: 6.2387 - mean_squared_error: 67.9918
2048/6000 [=========>....................] - ETA: 0s - loss: 67.5451 - mean_absolute_error: 6.2237 - mean_squared_error: 67.5451
2688/6000 [============>.................] - ETA: 0s - loss: 66.0752 - mean_absolute_error: 6.1720 - mean_squared_error: 66.0752
3328/6000 [===============>..............] - ETA: 0s - loss: 65.2672 - mean_absolute_error: 6.1285 - mean_squared_error: 65.2672
3968/6000 [==================>...........] - ETA: 0s - loss: 64.2984 - mean_absolute_error: 6.0648 - mean_squared_error: 64.2984
4608/6000 [======================>.......] - ETA: 0s - loss: 64.0864 - mean_absolute_error: 6.0427 - mean_squared_error: 64.0864
5248/6000 [=========================>....] - ETA: 0s - loss: 63.6569 - mean_absolute_error: 6.0145 - mean_squared_error: 63.6569
5888/6000 [============================>.] - ETA: 0s - loss: 62.9842 - mean_absolute_error: 5.9755 - mean_squared_error: 62.9842
6000/6000 [==============================] - 1s 91us/step - loss: 62.7482 - mean_absolute_error: 5.9611 - mean_squared_error: 62.7482
Epoch 3/10

 128/6000 [..............................] - ETA: 0s - loss: 61.8413 - mean_absolute_error: 5.7161 - mean_squared_error: 61.8413
 768/6000 [==>...........................] - ETA: 0s - loss: 58.5583 - mean_absolute_error: 5.6651 - mean_squared_error: 58.5583
1408/6000 [======>.......................] - ETA: 0s - loss: 56.1098 - mean_absolute_error: 5.5136 - mean_squared_error: 56.1098
2048/6000 [=========>....................] - ETA: 0s - loss: 56.0229 - mean_absolute_error: 5.5496 - mean_squared_error: 56.0229
2688/6000 [============>.................] - ETA: 0s - loss: 54.2504 - mean_absolute_error: 5.4556 - mean_squared_error: 54.2504
3328/6000 [===============>..............] - ETA: 0s - loss: 53.8540 - mean_absolute_error: 5.4297 - mean_squared_error: 53.8540
3968/6000 [==================>...........] - ETA: 0s - loss: 53.1789 - mean_absolute_error: 5.3913 - mean_squared_error: 53.1789
4608/6000 [======================>.......] - ETA: 0s - loss: 52.3208 - mean_absolute_error: 5.3472 - mean_squared_error: 52.3208
5248/6000 [=========================>....] - ETA: 0s - loss: 51.9508 - mean_absolute_error: 5.3264 - mean_squared_error: 51.9508
5888/6000 [============================>.] - ETA: 0s - loss: 51.1977 - mean_absolute_error: 5.2825 - mean_squared_error: 51.1977
6000/6000 [==============================] - 1s 90us/step - loss: 50.9306 - mean_absolute_error: 5.2682 - mean_squared_error: 50.9306
Epoch 4/10

 128/6000 [..............................] - ETA: 0s - loss: 47.0211 - mean_absolute_error: 5.1100 - mean_squared_error: 47.0211
 768/6000 [==>...........................] - ETA: 0s - loss: 43.6467 - mean_absolute_error: 4.8478 - mean_squared_error: 43.6467
1408/6000 [======>.......................] - ETA: 0s - loss: 43.2985 - mean_absolute_error: 4.8099 - mean_squared_error: 43.2985
1920/6000 [========>.....................] - ETA: 0s - loss: 43.9568 - mean_absolute_error: 4.8462 - mean_squared_error: 43.9568
2560/6000 [===========>..................] - ETA: 0s - loss: 42.5796 - mean_absolute_error: 4.7748 - mean_squared_error: 42.5796
3200/6000 [===============>..............] - ETA: 0s - loss: 42.6068 - mean_absolute_error: 4.7683 - mean_squared_error: 42.6068
3840/6000 [==================>...........] - ETA: 0s - loss: 41.9696 - mean_absolute_error: 4.7167 - mean_squared_error: 41.9696
4480/6000 [=====================>........] - ETA: 0s - loss: 41.5916 - mean_absolute_error: 4.6874 - mean_squared_error: 41.5916
5120/6000 [========================>.....] - ETA: 0s - loss: 41.0328 - mean_absolute_error: 4.6507 - mean_squared_error: 41.0328
5760/6000 [===========================>..] - ETA: 0s - loss: 40.4381 - mean_absolute_error: 4.6071 - mean_squared_error: 40.4381
6000/6000 [==============================] - 1s 92us/step - loss: 40.4613 - mean_absolute_error: 4.6071 - mean_squared_error: 40.4613
Epoch 5/10

 128/6000 [..............................] - ETA: 0s - loss: 34.6435 - mean_absolute_error: 4.0585 - mean_squared_error: 34.6435
 768/6000 [==>...........................] - ETA: 0s - loss: 34.5895 - mean_absolute_error: 4.1533 - mean_squared_error: 34.5895
1408/6000 [======>.......................] - ETA: 0s - loss: 34.7968 - mean_absolute_error: 4.1752 - mean_squared_error: 34.7968
2048/6000 [=========>....................] - ETA: 0s - loss: 33.7800 - mean_absolute_error: 4.1122 - mean_squared_error: 33.7800
2560/6000 [===========>..................] - ETA: 0s - loss: 33.9821 - mean_absolute_error: 4.1116 - mean_squared_error: 33.9821
3200/6000 [===============>..............] - ETA: 0s - loss: 33.1974 - mean_absolute_error: 4.0503 - mean_squared_error: 33.1974
3840/6000 [==================>...........] - ETA: 0s - loss: 32.9066 - mean_absolute_error: 4.0170 - mean_squared_error: 32.9066
4480/6000 [=====================>........] - ETA: 0s - loss: 32.3775 - mean_absolute_error: 3.9836 - mean_squared_error: 32.3775
5120/6000 [========================>.....] - ETA: 0s - loss: 31.6515 - mean_absolute_error: 3.9247 - mean_squared_error: 31.6515
5760/6000 [===========================>..] - ETA: 0s - loss: 31.2151 - mean_absolute_error: 3.8888 - mean_squared_error: 31.2151
6000/6000 [==============================] - 1s 91us/step - loss: 31.3043 - mean_absolute_error: 3.8934 - mean_squared_error: 31.3043
Epoch 6/10

 128/6000 [..............................] - ETA: 0s - loss: 22.0898 - mean_absolute_error: 3.3049 - mean_squared_error: 22.0898
 768/6000 [==>...........................] - ETA: 0s - loss: 27.1893 - mean_absolute_error: 3.6002 - mean_squared_error: 27.1893
1408/6000 [======>.......................] - ETA: 0s - loss: 26.1150 - mean_absolute_error: 3.4905 - mean_squared_error: 26.1150
2048/6000 [=========>....................] - ETA: 0s - loss: 24.8919 - mean_absolute_error: 3.3737 - mean_squared_error: 24.8919
2688/6000 [============>.................] - ETA: 0s - loss: 25.2188 - mean_absolute_error: 3.3919 - mean_squared_error: 25.2188
3328/6000 [===============>..............] - ETA: 0s - loss: 25.0102 - mean_absolute_error: 3.3684 - mean_squared_error: 25.0102
3968/6000 [==================>...........] - ETA: 0s - loss: 24.6771 - mean_absolute_error: 3.3395 - mean_squared_error: 24.6771
4608/6000 [======================>.......] - ETA: 0s - loss: 24.6107 - mean_absolute_error: 3.3208 - mean_squared_error: 24.6107
5248/6000 [=========================>....] - ETA: 0s - loss: 24.1618 - mean_absolute_error: 3.2718 - mean_squared_error: 24.1618
5888/6000 [============================>.] - ETA: 0s - loss: 24.0293 - mean_absolute_error: 3.2588 - mean_squared_error: 24.0293
6000/6000 [==============================] - 1s 92us/step - loss: 23.9929 - mean_absolute_error: 3.2512 - mean_squared_error: 23.9929
Epoch 7/10

 128/6000 [..............................] - ETA: 0s - loss: 21.6996 - mean_absolute_error: 2.8900 - mean_squared_error: 21.6996
 768/6000 [==>...........................] - ETA: 0s - loss: 20.4574 - mean_absolute_error: 2.8867 - mean_squared_error: 20.4574
1408/6000 [======>.......................] - ETA: 0s - loss: 20.1095 - mean_absolute_error: 2.8219 - mean_squared_error: 20.1095
2048/6000 [=========>....................] - ETA: 0s - loss: 19.8047 - mean_absolute_error: 2.8250 - mean_squared_error: 19.8047
2688/6000 [============>.................] - ETA: 0s - loss: 19.4552 - mean_absolute_error: 2.8015 - mean_squared_error: 19.4552
3200/6000 [===============>..............] - ETA: 0s - loss: 19.3751 - mean_absolute_error: 2.7975 - mean_squared_error: 19.3751
3840/6000 [==================>...........] - ETA: 0s - loss: 19.0592 - mean_absolute_error: 2.7730 - mean_squared_error: 19.0592
4480/6000 [=====================>........] - ETA: 0s - loss: 18.8030 - mean_absolute_error: 2.7618 - mean_squared_error: 18.8030
5120/6000 [========================>.....] - ETA: 0s - loss: 18.3989 - mean_absolute_error: 2.7319 - mean_squared_error: 18.3989
5760/6000 [===========================>..] - ETA: 0s - loss: 18.2053 - mean_absolute_error: 2.7238 - mean_squared_error: 18.2053
6000/6000 [==============================] - 1s 92us/step - loss: 18.0863 - mean_absolute_error: 2.7146 - mean_squared_error: 18.0863
Epoch 8/10

 128/6000 [..............................] - ETA: 0s - loss: 13.8538 - mean_absolute_error: 2.4690 - mean_squared_error: 13.8538
 768/6000 [==>...........................] - ETA: 0s - loss: 14.5617 - mean_absolute_error: 2.4808 - mean_squared_error: 14.5617
1408/6000 [======>.......................] - ETA: 0s - loss: 15.5544 - mean_absolute_error: 2.5659 - mean_squared_error: 15.5544
2048/6000 [=========>....................] - ETA: 0s - loss: 14.9070 - mean_absolute_error: 2.5173 - mean_squared_error: 14.9070
2688/6000 [============>.................] - ETA: 0s - loss: 14.4333 - mean_absolute_error: 2.4841 - mean_squared_error: 14.4333
3328/6000 [===============>..............] - ETA: 0s - loss: 14.2224 - mean_absolute_error: 2.4680 - mean_squared_error: 14.2224
3968/6000 [==================>...........] - ETA: 0s - loss: 13.9197 - mean_absolute_error: 2.4464 - mean_squared_error: 13.9197
4608/6000 [======================>.......] - ETA: 0s - loss: 13.7211 - mean_absolute_error: 2.4380 - mean_squared_error: 13.7211
5248/6000 [=========================>....] - ETA: 0s - loss: 13.6667 - mean_absolute_error: 2.4423 - mean_squared_error: 13.6667
5888/6000 [============================>.] - ETA: 0s - loss: 13.5377 - mean_absolute_error: 2.4396 - mean_squared_error: 13.5377
6000/6000 [==============================] - 1s 91us/step - loss: 13.5543 - mean_absolute_error: 2.4414 - mean_squared_error: 13.5543
Epoch 9/10

 128/6000 [..............................] - ETA: 0s - loss: 12.1362 - mean_absolute_error: 2.3601 - mean_squared_error: 12.1362
 768/6000 [==>...........................] - ETA: 0s - loss: 12.2603 - mean_absolute_error: 2.3658 - mean_squared_error: 12.2603
1408/6000 [======>.......................] - ETA: 0s - loss: 11.4784 - mean_absolute_error: 2.3294 - mean_squared_error: 11.4784
2048/6000 [=========>....................] - ETA: 0s - loss: 11.1778 - mean_absolute_error: 2.2968 - mean_squared_error: 11.1778
2688/6000 [============>.................] - ETA: 0s - loss: 11.0046 - mean_absolute_error: 2.2863 - mean_squared_error: 11.0046
3328/6000 [===============>..............] - ETA: 0s - loss: 10.8692 - mean_absolute_error: 2.2805 - mean_squared_error: 10.8692
3968/6000 [==================>...........] - ETA: 0s - loss: 10.6651 - mean_absolute_error: 2.2615 - mean_squared_error: 10.6651
4608/6000 [======================>.......] - ETA: 0s - loss: 10.4864 - mean_absolute_error: 2.2449 - mean_squared_error: 10.4864
5248/6000 [=========================>....] - ETA: 0s - loss: 10.3132 - mean_absolute_error: 2.2288 - mean_squared_error: 10.3132
5888/6000 [============================>.] - ETA: 0s - loss: 10.2276 - mean_absolute_error: 2.2204 - mean_squared_error: 10.2276
6000/6000 [==============================] - 1s 92us/step - loss: 10.1615 - mean_absolute_error: 2.2117 - mean_squared_error: 10.1615
Epoch 10/10

 128/6000 [..............................] - ETA: 0s - loss: 8.9600 - mean_absolute_error: 2.1293 - mean_squared_error: 8.9600
 768/6000 [==>...........................] - ETA: 0s - loss: 8.3443 - mean_absolute_error: 2.0512 - mean_squared_error: 8.3443
1408/6000 [======>.......................] - ETA: 0s - loss: 8.0303 - mean_absolute_error: 2.0181 - mean_squared_error: 8.0303
2048/6000 [=========>....................] - ETA: 0s - loss: 8.0200 - mean_absolute_error: 2.0209 - mean_squared_error: 8.0200
2688/6000 [============>.................] - ETA: 0s - loss: 7.8114 - mean_absolute_error: 1.9953 - mean_squared_error: 7.8114
3200/6000 [===============>..............] - ETA: 0s - loss: 7.7712 - mean_absolute_error: 1.9919 - mean_squared_error: 7.7712
3840/6000 [==================>...........] - ETA: 0s - loss: 7.6235 - mean_absolute_error: 1.9730 - mean_squared_error: 7.6235
4480/6000 [=====================>........] - ETA: 0s - loss: 7.5487 - mean_absolute_error: 1.9592 - mean_squared_error: 7.5487
5120/6000 [========================>.....] - ETA: 0s - loss: 7.4881 - mean_absolute_error: 1.9583 - mean_squared_error: 7.4881
5760/6000 [===========================>..] - ETA: 0s - loss: 7.3515 - mean_absolute_error: 1.9427 - mean_squared_error: 7.3515
6000/6000 [==============================] - 1s 91us/step - loss: 7.2762 - mean_absolute_error: 1.9342 - mean_squared_error: 7.2762

Process finished with exit code 0

 
