Keras Learning Notes 1: Building a Multilayer Perceptron

A regression prediction problem:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

x = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(x).ravel()

y[::5] += 3 * (0.5 - np.random.rand(8))

model = keras.Sequential()
model.add(layers.Dense(100, activation="relu", input_dim=1))
model.add(layers.Dense(50, activation="relu"))
model.add(layers.Dense(1))
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=1000)
y_pre = model.predict(x)

plt.scatter(x, y, color='c', edgecolors='k')
plt.plot(x, y_pre, color='m')
plt.xlabel("x")
plt.ylabel("y")
plt.show()

(Figure: the fitted curve plotted over the noisy sin(x) training points.)
The operation Dense() performs is output = activation(dot(input, kernel) + bias), which anyone who has studied deep learning will recognize. A single Dense call builds one layer of the network, so a multilayer network is built by stacking several Dense layers. The units argument is the number of neurons in the hidden layer, which is also the dimension of that layer's output, and the activation argument is the layer's activation function. compile() then specifies the gradient-descent optimizer and the loss function. fit and predict work much like the corresponding methods on sklearn models; the epochs argument of fit is the number of training iterations.
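To make the formula above concrete, here is a minimal numpy-only sketch (not Keras itself) of what the first Dense(100, activation="relu", input_dim=1) layer computes; the random kernel values are made up for illustration, while Keras would learn them during fit:

```python
import numpy as np

# output = activation(dot(input, kernel) + bias)
def relu(z):
    return np.maximum(z, 0.0)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 1))          # a batch of 4 samples, input_dim=1
kernel = rng.normal(size=(1, 100))   # kernel of Dense(100, input_dim=1)
bias = np.zeros(100)                 # bias starts at zeros, as in Keras

hidden = relu(x @ kernel + bias)     # the first hidden layer's output
print(hidden.shape)                  # (4, 100): units sets the output dimension
```

Note how units only fixes the second dimension of the output; the batch dimension passes through unchanged.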
You can also change the learning rate of the Adam optimizer; the default learning rate is 0.001:

tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    name="Adam",
    **kwargs
)

The full example with a modified learning rate:

import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam

x = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(x).ravel()

y[::5] += 3 * (0.5 - np.random.rand(8))

model = keras.Sequential()
model.add(layers.Dense(100, activation="relu", input_dim=1))
model.add(layers.Dense(50, activation="relu"))
model.add(layers.Dense(1))
# change the learning rate
adam = Adam(learning_rate=0.01)

model.compile(optimizer=adam, loss="mse")
model.fit(x, y, epochs=1000)
y_pre = model.predict(x)

plt.scatter(x, y, color='c', edgecolors='k')
plt.plot(x, y_pre, color='m', linewidth=2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
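For reference, the update rule that these Adam hyperparameters control can be sketched in numpy. This is the standard Adam step with bias-corrected moment estimates, written as a standalone function for illustration rather than how Keras implements it internally:

```python
import numpy as np

# One Adam update step for a parameter vector theta, given its gradient.
def adam_step(theta, grad, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-7):
    m = beta_1 * m + (1 - beta_1) * grad          # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)                 # bias correction
    v_hat = v / (1 - beta_2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + epsilon)
    return theta, m, v

theta = np.array([1.0])
m, v = np.zeros(1), np.zeros(1)
theta, m, v = adam_step(theta, np.array([0.5]), m, v, t=1)
print(theta)  # the first step moves by about lr, regardless of gradient scale
```

This also shows why lr is the hyperparameter most worth tuning: it directly scales every parameter update.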

A binary classification problem:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam

X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
                           random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
x, y = X, y  # rename for consistency with the rest of the script
m, n = x.shape
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=.3, random_state=42)

p, q = x_test.shape

model = keras.Sequential()
model.add(layers.Dense(100, input_dim=2))
model.add(layers.Dense(50, activation="relu"))
model.add(layers.Dense(1, activation="sigmoid"))

model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_train, y_train, epochs=1000)
# predict_classes was removed in newer TensorFlow versions;
# threshold the sigmoid output at 0.5 instead
y_pre = (model.predict(x_test) > 0.5).astype("int32").ravel()
print(y_pre)
print(y_test)

for i in range(m):
    if y[i] == 1:
        plt.scatter(x[i, 0], x[i, 1], color='c', edgecolors='k', s=60)
    else:
        plt.scatter(x[i, 0], x[i, 1], color='m', edgecolors='k', s=60)
        
for i in range(p):
    if y_pre[i] != y_test[i]:
        plt.scatter(x_test[i, 0], x_test[i, 1], color='gold', edgecolors='k', s=60)
plt.xlabel("x1")
plt.ylabel("x2")
plt.show()

(Figure: the two classes scattered in the plane, with misclassified test points highlighted in gold.)
In this run, only two test-set samples were misclassified.
The predicted classes are obtained by thresholding the final sigmoid output at 0.5, which is exactly what the old Sequential.predict_classes method did internally (it was removed in newer TensorFlow versions).
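The thresholding step can be shown in isolation; the probability values below are made up for illustration, standing in for what model.predict would return:

```python
import numpy as np

# model.predict returns sigmoid probabilities in [0, 1];
# classes come from thresholding at 0.5.
probs = np.array([[0.12], [0.97], [0.51], [0.49]])   # hypothetical predict() output
classes = (probs > 0.5).astype("int32").ravel()
print(classes)  # [0 1 1 0]
```

For problems with a different cost for false positives vs. false negatives, the same pattern lets you move the threshold away from 0.5.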
