Notes on "Dive into Deep Learning": Polynomial Fitting

These notes follow Mu Li's "Dive into Deep Learning" series; the code is implemented with MXNet.

Code warm-up

A brief overview of the APIs used; see the official API reference for details.
gluon.nn.Sequential(): stacks network layers sequentially
gluon.loss.L2Loss(): computes the squared (L2) loss
nd.random.normal(): draws random samples from a normal distribution
gluon.nn.Dense(): fully connected layer
Block.initialize(): initializes the parameters
gluon.data.ArrayDataset(): builds a dataset from arrays, lists, etc.
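One detail worth noting about gluon.loss.L2Loss: it includes a factor of 1/2, so the per-example loss is ½(ŷ − y)². A NumPy sketch of the same computation (l2_loss is a hypothetical helper name, not part of any library):

```python
import numpy as np

def l2_loss(y_hat, y):
    # Mirrors gluon.loss.L2Loss: 0.5 * (y_hat - y)^2 per example
    return 0.5 * (y_hat - y) ** 2

out = l2_loss(np.array([1.0, 2.0]), np.array([0.0, 2.0]))
print(out)  # 0.5*(1-0)^2 = 0.5 and 0.5*(2-2)^2 = 0.0
```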

The problem

Use a neural network to fit the cubic polynomial

y = 1.2x - 3.4x^2 + 5.6x^3 + 5 + noise

and, using simulated data, observe underfitting and overfitting along the way.
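As a quick sanity check on the target function, the noise-free polynomial can be evaluated directly; at x = 1 the terms sum to 1.2 − 3.4 + 5.6 + 5 = 8.4:

```python
# Noise-free version of the target polynomial y = 1.2x - 3.4x^2 + 5.6x^3 + 5
def poly(x):
    return 1.2 * x - 3.4 * x**2 + 5.6 * x**3 + 5

print(poly(1.0))  # 8.4 (up to float rounding)
```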

Simulated data

from mxnet import ndarray as nd

num_train = 100
num_test = 100
true_w = [1.2, -3.4, 5.6]
true_b = 5.0
# Generate noisy simulated data from the true parameters
x = nd.random.normal(shape=[num_train+num_test, 1])
X = nd.concat(x, nd.power(x, 2), nd.power(x, 3))
y = true_w[0]*X[:, 0] + true_w[1]*X[:, 1] + true_w[2]*X[:, 2] + true_b
y += 0.1 * nd.random.normal(shape=y.shape)  # add noise rather than overwrite y
y_train, y_test = y[:num_train], y[num_train:]
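For readers without MXNet installed, the same generation step can be sketched in NumPy (variable names mirror the snippet above; the seed is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
num_train, num_test = 100, 100
true_w = np.array([1.2, -3.4, 5.6])
true_b = 5.0

x = rng.standard_normal((num_train + num_test, 1))
X = np.concatenate([x, x**2, x**3], axis=1)  # feature columns [x, x^2, x^3]
y = X @ true_w + true_b
y += 0.1 * rng.standard_normal(y.shape)      # additive Gaussian noise

print(X.shape, y.shape)  # (200, 3) (200,)
```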

Defining the training and testing steps

  1. Testing

from mxnet import gluon

# Defined at module level so that both test() and train() can see it
square_loss = gluon.loss.L2Loss()


def test(net, x, y):
	return square_loss(net(x), y)

  2. Training
import matplotlib.pyplot as plt
from mxnet import autograd, gluon


def train(x_train, x_test, y_train, y_test):
	# Model
	net = gluon.nn.Sequential()
	with net.name_scope():
		net.add(gluon.nn.Dense(1))
	net.initialize()
	# Hyperparameters
	learning_rate = 0.01
	epochs = 100
	batch_size = 100
	dataset_train = gluon.data.ArrayDataset(x_train, y_train)
	data_iter_train = gluon.data.DataLoader(dataset_train, batch_size, shuffle=True)
	# Optimizer
	trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": learning_rate})
	# Loss function
	square_loss = gluon.loss.L2Loss()
	# Training loop
	train_loss = []
	test_loss = []
	for e in range(epochs):
		for data, label in data_iter_train:
			with autograd.record():
				output = net(data)
				loss = square_loss(output, label)
			loss.backward()
			trainer.step(batch_size)
		train_loss.append(square_loss(net(x_train), y_train).mean().asscalar())
		test_loss.append(square_loss(net(x_test), y_test).mean().asscalar())
	plt.plot(train_loss)
	plt.plot(test_loss)
	plt.show()
	print("learned w", net[0].weight.data(), "learned b", net[0].bias.data())


train(X[:num_train], X[num_train:], y[:num_train], y[num_train:])
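Because the model is linear in the features [x, x², x³], the SGD training above should converge toward the true parameters, and the same fit also has a closed-form least-squares solution. A NumPy cross-check (independent of the MXNet code; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([1.2, -3.4, 5.6])
true_b = 5.0

# Regenerate data as above: cubic features plus Gaussian noise
x = rng.standard_normal((200, 1))
X = np.concatenate([x, x**2, x**3], axis=1)
y = X @ true_w + true_b + 0.1 * rng.standard_normal(200)

# Append a bias column and solve the least-squares problem directly
Xb = np.concatenate([X, np.ones((200, 1))], axis=1)
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)
print(coef)  # close to [1.2, -3.4, 5.6, 5.0]
```

With only 0.1 noise and 200 samples, the recovered coefficients land near the true values, which is a useful baseline to compare against the learned w and b printed by train().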