[Deep Learning Project in Practice] An LSTM Model for Multivariate Time Series Forecasting with Keras


This article walks through an LSTM model for multivariate time series forecasting, built with Keras.

Project: Air Pollution Forecasting

1. Overview

How to transform a raw dataset into something usable for time series forecasting.
How to prepare the data and fit an LSTM to a multivariate time series forecasting problem.
How to make a forecast and rescale the results back into the original units.

2. Data Download

In this tutorial, we will use an air quality dataset. It reports five years of hourly weather and pollution-level readings at the US embassy in Beijing, China. The data includes the date-time, the pollution measurement (PM2.5 concentration), and weather information including dew point, temperature, pressure, wind direction, wind speed, and the cumulative number of hours of snow and rain. The complete feature list in the raw data is as follows:

No: row number
year: year of the data in this row
month: month of the data in this row
day: day of the data in this row
hour: hour of the data in this row
pm2.5: PM2.5 concentration
DEWP: dew point
TEMP: temperature
PRES: pressure
cbwd: combined wind direction
Iws: cumulated wind speed
Is: cumulated hours of snow
Ir: cumulated hours of rain

We can use this data to frame a forecasting problem: given the weather conditions and pollution for the prior hours, forecast the pollution at the next hour.

Dataset download link
Download the dataset and save it as raw.csv.

3. Data Preparation

The first step is to consolidate the scattered date-time fields into a single date-time that we can use as the Pandas index.
A quick check reveals NA values for pm2.5 across the entire first day, so we will need to drop those first 24 rows. There are a few other scattered "NA" values in the dataset; for now we can mark them with 0 values.
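
That quick check can be done directly. A minimal sketch (the path is reused from the script below and is an assumption; adjust it to wherever raw.csv was saved):

# hedged check: the pm2.5 column should be entirely NA for the first 24 hours
from pandas import read_csv

raw = read_csv(r'D:\深度學習\數據集\raw.csv')
print(raw['pm2.5'].head(24).isna().sum())  # 24 if the whole first day is missing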

The script below loads the raw dataset and parses the date-time information into the Pandas DataFrame index. The "No" column is dropped, each column is given a clearer name, the NA values are replaced with "0", and the first 24 hours are removed.


# -*- coding: utf-8 -*-
from datetime import datetime
from pandas import read_csv

# parse the split year/month/day/hour fields into a single datetime
def parse(x):
    return datetime.strptime(x, '%Y %m %d %H')

# path to the raw data
data_path = r'D:\深度學習\數據集\raw.csv'
# load the dataset, merging the four date columns into one parsed index
dataset = read_csv(data_path, sep=',', parse_dates=[['year', 'month', 'day', 'hour']], index_col=0, date_parser=parse)
# drop the 'No' column
dataset.drop('No', axis=1, inplace=True)
# give each column a clearer name
dataset.columns = ['pollution', 'dew', 'temp', 'press', 'wnd_dir', 'wnd_spd', 'snow', 'rain']
# name the index
dataset.index.name = 'date'
# mark all NA pollution values with 0
dataset['pollution'].fillna(0, inplace=True)

# drop the first 24 hours (pm2.5 is entirely NA on the first day)
dataset = dataset[24:]
# print the first five rows
print(dataset.head(5))
# save the processed data
dataset.to_csv(r'D:\深度學習\數據集\pollution.csv')
                     pollution  dew  temp   press wnd_dir  wnd_spd  snow  rain
date                                                                          
2010-01-02 00:00:00      129.0  -16  -4.0  1020.0      SE     1.79     0     0
2010-01-02 01:00:00      148.0  -15  -4.0  1020.0      SE     2.68     0     0
2010-01-02 02:00:00      159.0  -11  -5.0  1021.0      SE     3.57     0     0
2010-01-02 03:00:00      181.0   -7  -5.0  1022.0      SE     5.36     1     0
2010-01-02 04:00:00      138.0   -7  -5.0  1022.0      SE     6.25     2     0
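
With pollution.csv saved, it helps to eyeball each series before modeling. A minimal sketch (assuming the same file path as above; the cast to float is added so the mixed-type array plots cleanly):

# plot each variable in its own subplot, skipping the categorical wind direction
from pandas import read_csv
from matplotlib import pyplot

dataset = read_csv(r'D:\深度學習\數據集\pollution.csv', header=0, index_col=0)
values = dataset.values
groups = [0, 1, 2, 3, 5, 6, 7]  # column indices to plot (4 is wnd_dir, a string)
for i, group in enumerate(groups):
    pyplot.subplot(len(groups), 1, i + 1)
    pyplot.plot(values[:, group].astype('float32'))
    pyplot.title(dataset.columns[group], y=0.5, loc='right')
pyplot.show()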

4. Building the Multivariate LSTM Forecasting Model

# -*- coding: utf-8 -*-


from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error,r2_score
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM


# convert a time series into a supervised learning dataset
def series_to_supervised(data, n_in=1, n_out=1, dropnan=True):
    n_vars = 1 if type(data) is list else data.shape[1]
    df = DataFrame(data)
    cols, names = list(), list()
    # input sequence (t-n, ... t-1)
    for i in range(n_in, 0, -1):
        cols.append(df.shift(i))
        names += [('var%d(t-%d)' % (j + 1, i)) for j in range(n_vars)]
    # forecast sequence (t, t+1, ... t+n)
    for i in range(0, n_out):
        cols.append(df.shift(-i))
        if i == 0:
            names += [('var%d(t)' % (j + 1)) for j in range(n_vars)]
        else:
            names += [('var%d(t+%d)' % (j + 1, i)) for j in range(n_vars)]
    # put it all together
    agg = concat(cols, axis=1)
    agg.columns = names
    # drop rows with NaN values
    if dropnan:
        agg.dropna(inplace=True)
    return agg
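
# A quick sanity check of series_to_supervised (a hedged example, not part of
# the original script): with a small 2-column array, one lag in and one step
# out keeps var1(t-1), var2(t-1), var1(t), var2(t) and drops the NaN first row:
#
#   from numpy import array
#   demo = array([[1, 10], [2, 20], [3, 30]])
#   print(series_to_supervised(demo, 1, 1))
#   #    var1(t-1)  var2(t-1)  var1(t)  var2(t)
#   # 1        1.0       10.0        2       20
#   # 2        2.0       20.0        3       30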


# load the processed dataset
dataset = read_csv(r'D:\深度學習\數據集\pollution.csv', header=0, index_col=0)
values = dataset.values

# integer-encode the categorical wind-direction column
# (LabelEncoder assigns integer labels, not a one-hot encoding)
encoder = LabelEncoder()
values[:, 4] = encoder.fit_transform(values[:, 4])

# ensure all data is float
values = values.astype('float32')

# normalize all features to the range [0, 1]
scaler = MinMaxScaler(feature_range=(0, 1))
scaled = scaler.fit_transform(values)

# frame as a supervised learning dataset (one lag step in, one step out)
reframed = series_to_supervised(scaled, 1, 1)

# drop the time-t columns for everything except pollution (var1),
# which is the variable we want to predict
reframed.drop(reframed.columns[[9, 10, 11, 12, 13, 14, 15]], axis=1, inplace=True)



# split into training and test sets: use the first year of data for training
values = reframed.values
n_train_hours = 365 * 24
train = values[:n_train_hours, :]
test = values[n_train_hours:, :]
train_X, train_y = train[:, :-1], train[:, -1]
test_X, test_y = test[:, :-1], test[:, -1]

# reshape input to be 3D [samples, timesteps, features], as the LSTM layer expects
train_X = train_X.reshape((train_X.shape[0], 1, train_X.shape[1]))
test_X = test_X.reshape((test_X.shape[0], 1, test_X.shape[1]))

# confirm the shapes: (samples, 1 timestep, 8 features)
print(train_X.shape, train_y.shape, test_X.shape, test_y.shape)



# define the LSTM model: 50 units, followed by a single output neuron
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dense(1))
model.compile(loss='mae', optimizer='adam')
# fit the model
history = model.fit(train_X, train_y, epochs=100, batch_size=50, validation_data=(test_X, test_y), verbose=2, shuffle=False)

# plot the training and validation loss
pyplot.plot(history.history['loss'], label='train')
pyplot.plot(history.history['val_loss'], label='test')
pyplot.legend()
pyplot.show()


# make a prediction on the test set
yhat = model.predict(test_X)

# invert scaling for the forecast: rebuild a full feature row,
# inverse-transform it, then take the pollution column
test_X = test_X.reshape((test_X.shape[0], test_X.shape[2]))
inv_yhat = concatenate((yhat, test_X[:, 1:]), axis=1)
inv_yhat = scaler.inverse_transform(inv_yhat)
inv_yhat = inv_yhat[:, 0]

# invert scaling for the actual values in the same way
test_y = test_y.reshape((len(test_y), 1))
inv_y = concatenate((test_y, test_X[:, 1:]), axis=1)
inv_y = scaler.inverse_transform(inv_y)
inv_y = inv_y[:, 0]

# evaluate the model
# root mean squared error (RMSE), in the original units of pollution
rmse = sqrt(mean_squared_error(inv_y, inv_yhat))
# coefficient of determination (R^2)
r2 = r2_score(inv_y, inv_yhat)

print('Test RMSE: %.3f' % rmse)
print('Test R2:%.3f' % r2)



(Figure: training vs. validation loss over the 100 epochs)

Using TensorFlow backend.
2019-12-19 10:32:46.083137: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
(8760, 1, 8) (8760,) (35039, 1, 8) (35039,)
2019-12-19 10:32:47.305909: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
2019-12-19 10:32:47.333454: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce GTX 1650 major: 7 minor: 5 memoryClockRate(GHz): 1.56
pciBusID: 0000:01:00.0
2019-12-19 10:32:47.333855: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-12-19 10:32:47.334613: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-19 10:32:47.335203: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-12-19 10:32:47.337976: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties: 
name: GeForce GTX 1650 major: 7 minor: 5 memoryClockRate(GHz): 1.56
pciBusID: 0000:01:00.0
2019-12-19 10:32:47.338315: I tensorflow/stream_executor/platform/default/dlopen_checker_stub.cc:25] GPU libraries are statically linked, skip dlopen check.
2019-12-19 10:32:47.339027: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1746] Adding visible gpu devices: 0
2019-12-19 10:32:47.854835: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1159] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-12-19 10:32:47.855022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1165]      0 
2019-12-19 10:32:47.855120: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1178] 0:   N 
2019-12-19 10:32:47.855868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1304] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2919 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1650, pci bus id: 0000:01:00.0, compute capability: 7.5)
Train on 8760 samples, validate on 35039 samples
Epoch 1/100
2019-12-19 10:32:48.783437: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
 - 2s - loss: 0.0606 - val_loss: 0.0485
Epoch 2/100
 - 1s - loss: 0.0347 - val_loss: 0.0372
Epoch 3/100
 - 1s - loss: 0.0180 - val_loss: 0.0230
Epoch 4/100
 - 1s - loss: 0.0157 - val_loss: 0.0165
Epoch 5/100
 - 1s - loss: 0.0149 - val_loss: 0.0147
Epoch 6/100
 - 2s - loss: 0.0149 - val_loss: 0.0145
Epoch 7/100
 - 2s - loss: 0.0146 - val_loss: 0.0147
Epoch 8/100
 - 2s - loss: 0.0147 - val_loss: 0.0147
Epoch 9/100
 - 2s - loss: 0.0146 - val_loss: 0.0150
Epoch 10/100
 - 2s - loss: 0.0144 - val_loss: 0.0155
Epoch 11/100
 - 2s - loss: 0.0149 - val_loss: 0.0148
Epoch 12/100
 - 2s - loss: 0.0149 - val_loss: 0.0151
Epoch 13/100
 - 2s - loss: 0.0146 - val_loss: 0.0150
Epoch 14/100
 - 2s - loss: 0.0147 - val_loss: 0.0149
Epoch 15/100
 - 2s - loss: 0.0146 - val_loss: 0.0147
Epoch 16/100
 - 2s - loss: 0.0151 - val_loss: 0.0154
Epoch 17/100
 - 2s - loss: 0.0150 - val_loss: 0.0154
Epoch 18/100
 - 2s - loss: 0.0148 - val_loss: 0.0152
Epoch 19/100
 - 2s - loss: 0.0149 - val_loss: 0.0153
Epoch 20/100
 - 2s - loss: 0.0148 - val_loss: 0.0157
Epoch 21/100
 - 2s - loss: 0.0147 - val_loss: 0.0156
Epoch 22/100
 - 2s - loss: 0.0147 - val_loss: 0.0157
Epoch 23/100
 - 2s - loss: 0.0147 - val_loss: 0.0158
Epoch 24/100
 - 2s - loss: 0.0147 - val_loss: 0.0156
Epoch 25/100
 - 2s - loss: 0.0146 - val_loss: 0.0154
Epoch 26/100
 - 2s - loss: 0.0146 - val_loss: 0.0155
Epoch 27/100
 - 2s - loss: 0.0146 - val_loss: 0.0155
Epoch 28/100
 - 2s - loss: 0.0146 - val_loss: 0.0148
Epoch 29/100
 - 2s - loss: 0.0147 - val_loss: 0.0149
Epoch 30/100
 - 2s - loss: 0.0146 - val_loss: 0.0156
Epoch 31/100
 - 2s - loss: 0.0146 - val_loss: 0.0151
Epoch 32/100
 - 2s - loss: 0.0146 - val_loss: 0.0152
Epoch 33/100
 - 2s - loss: 0.0146 - val_loss: 0.0150
Epoch 34/100
 - 2s - loss: 0.0145 - val_loss: 0.0149
Epoch 35/100
 - 2s - loss: 0.0147 - val_loss: 0.0147
Epoch 36/100
 - 2s - loss: 0.0145 - val_loss: 0.0148
Epoch 37/100
 - 2s - loss: 0.0145 - val_loss: 0.0147
Epoch 38/100
 - 2s - loss: 0.0146 - val_loss: 0.0146
Epoch 39/100
 - 2s - loss: 0.0145 - val_loss: 0.0146
Epoch 40/100
 - 2s - loss: 0.0145 - val_loss: 0.0143
Epoch 41/100
 - 2s - loss: 0.0144 - val_loss: 0.0143
Epoch 42/100
 - 2s - loss: 0.0145 - val_loss: 0.0143
Epoch 43/100
 - 2s - loss: 0.0146 - val_loss: 0.0144
Epoch 44/100
 - 2s - loss: 0.0145 - val_loss: 0.0141
Epoch 45/100
 - 2s - loss: 0.0144 - val_loss: 0.0139
Epoch 46/100
 - 2s - loss: 0.0146 - val_loss: 0.0140
Epoch 47/100
 - 2s - loss: 0.0146 - val_loss: 0.0140
Epoch 48/100
 - 2s - loss: 0.0143 - val_loss: 0.0138
Epoch 49/100
 - 2s - loss: 0.0145 - val_loss: 0.0140
Epoch 50/100
 - 2s - loss: 0.0143 - val_loss: 0.0139
Epoch 51/100
 - 2s - loss: 0.0142 - val_loss: 0.0138
Epoch 52/100
 - 2s - loss: 0.0142 - val_loss: 0.0140
Epoch 53/100
 - 2s - loss: 0.0146 - val_loss: 0.0139
Epoch 54/100
 - 2s - loss: 0.0144 - val_loss: 0.0138
Epoch 55/100
 - 2s - loss: 0.0145 - val_loss: 0.0138
Epoch 56/100
 - 2s - loss: 0.0145 - val_loss: 0.0138
Epoch 57/100
 - 2s - loss: 0.0144 - val_loss: 0.0136
Epoch 58/100
 - 2s - loss: 0.0145 - val_loss: 0.0137
Epoch 59/100
 - 2s - loss: 0.0143 - val_loss: 0.0137
Epoch 60/100
 - 2s - loss: 0.0141 - val_loss: 0.0137
Epoch 61/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 62/100
 - 2s - loss: 0.0146 - val_loss: 0.0144
Epoch 63/100
 - 2s - loss: 0.0145 - val_loss: 0.0140
Epoch 64/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 65/100
 - 2s - loss: 0.0145 - val_loss: 0.0144
Epoch 66/100
 - 2s - loss: 0.0142 - val_loss: 0.0137
Epoch 67/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 68/100
 - 2s - loss: 0.0142 - val_loss: 0.0137
Epoch 69/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 70/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 71/100
 - 2s - loss: 0.0142 - val_loss: 0.0137
Epoch 72/100
 - 2s - loss: 0.0142 - val_loss: 0.0137
Epoch 73/100
 - 2s - loss: 0.0142 - val_loss: 0.0137
Epoch 74/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 75/100
 - 2s - loss: 0.0143 - val_loss: 0.0138
Epoch 76/100
 - 2s - loss: 0.0144 - val_loss: 0.0137
Epoch 77/100
 - 2s - loss: 0.0144 - val_loss: 0.0136
Epoch 78/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 79/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 80/100
 - 2s - loss: 0.0143 - val_loss: 0.0135
Epoch 81/100
 - 2s - loss: 0.0142 - val_loss: 0.0135
Epoch 82/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 83/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Epoch 84/100
 - 2s - loss: 0.0143 - val_loss: 0.0135
Epoch 85/100
 - 2s - loss: 0.0143 - val_loss: 0.0135
Epoch 86/100
 - 2s - loss: 0.0143 - val_loss: 0.0135
Epoch 87/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 88/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 89/100
 - 2s - loss: 0.0142 - val_loss: 0.0135
Epoch 90/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 91/100
 - 2s - loss: 0.0143 - val_loss: 0.0135
Epoch 92/100
 - 2s - loss: 0.0144 - val_loss: 0.0136
Epoch 93/100
 - 2s - loss: 0.0142 - val_loss: 0.0135
Epoch 94/100
 - 2s - loss: 0.0142 - val_loss: 0.0135
Epoch 95/100
 - 2s - loss: 0.0142 - val_loss: 0.0135
Epoch 96/100
 - 2s - loss: 0.0144 - val_loss: 0.0135
Epoch 97/100
 - 2s - loss: 0.0143 - val_loss: 0.0136
Epoch 98/100
 - 2s - loss: 0.0142 - val_loss: 0.0134
Epoch 99/100
 - 2s - loss: 0.0141 - val_loss: 0.0136
Epoch 100/100
 - 2s - loss: 0.0142 - val_loss: 0.0136
Test RMSE: 26.489
Test R2:0.917

Process finished with exit code 0

Evaluating the Model

After the model is fit, we forecast the entire test dataset. With the forecasts and the actual values both rescaled back to their original units, we can compute error scores for the model. In this case we compute the root mean squared error (RMSE), which is in the same units as the pollution variable itself, along with the R^2 coefficient of determination:

Test RMSE: 26.489
Test R2:0.917

The model performs quite well.
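
As a visual follow-up (a hedged sketch, not in the original script; it reuses inv_y and inv_yhat from the code above, and the 500-hour window is an arbitrary choice):

# overlay actual vs. predicted pollution for the first 500 test hours
from matplotlib import pyplot

n = 500  # display window, an arbitrary choice
pyplot.plot(inv_y[:n], label='actual')
pyplot.plot(inv_yhat[:n], label='predicted')
pyplot.ylabel('pollution (pm2.5)')
pyplot.legend()
pyplot.show()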
