mxnet 3: Linear Regression

Using only ndarray and autograd

Generating the training set

from mxnet import nd, autograd
num_inputs, num_examples, true_w, true_b = 2, 1000, [2, -3.4], 4.2  # assumed example values; not given in the original
X = nd.random_normal(shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(shape=y.shape)

import random
batch_size = 10  # assumed, matching the gluon section below

def data_iter():
    idx = list(range(num_examples))
    random.shuffle(idx)  # visit the examples in random order
    for i in range(0, num_examples, batch_size):
        j = nd.array(idx[i:min(i + batch_size, num_examples)])
        yield X.take(j), y.take(j)  # select the rows of this batch
  • batch_size: the number of examples drawn from the training set on each iteration; a quick check of the iterator follows.
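
A minimal sanity check of the iterator (a sketch; the printed shapes assume batch_size = 10 and num_inputs = 2):

for data, label in data_iter():
    print(data.shape, label.shape)  # (10, 2) (10,)
    break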

Model

def net(X):
    return nd.dot(X, w) + b
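
net reads the weight w and bias b, and the training loop below updates params, but none of them are defined in the original. A minimal sketch that creates them and attaches gradient buffers so autograd can populate param.grad:

w = nd.random_normal(shape=(num_inputs, 1))
b = nd.zeros((1,))
params = [w, b]
for param in params:
    param.attach_grad()  # allocate the .grad buffer that SGD reads below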

Loss function

def square_loss(yhat, y):
    # reshape y to yhat's shape so the subtraction stays elementwise
    return (yhat - y.reshape(yhat.shape)) ** 2
  • the squared difference between the predicted value and the true value. yhat from net has shape (batch_size, 1) while y has shape (batch_size,), so without the reshape the subtraction would broadcast to a (batch_size, batch_size) matrix; see the demonstration below.
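
A small demonstration of why the reshape matters (a sketch; mxnet's ndarray applies NumPy-style broadcasting here):

a = nd.zeros((3, 1))
b = nd.zeros(3)
print((a - b).shape)                   # (3, 3): a silent broadcasting bug
print((a - b.reshape(a.shape)).shape)  # (3, 1): elementwise, as intended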

Optimizer

def SGD(params, lr):
    for param in params:
        # in-place assignment ([:]) updates the values without rebinding the array
        param[:] = param - lr * param.grad
  • each parameter moves one step of size lr against the direction of its gradient; the in-place write changes the parameter array rather than replacing it, so the attached gradient buffer stays valid. A worked one-parameter example follows.
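
A worked one-parameter example of one update step (a sketch; theta and the constants are chosen so the gradient is easy to verify by hand):

theta = nd.array([1.0])
theta.attach_grad()
with autograd.record():
    loss = (3 * theta) ** 2  # dloss/dtheta = 18 * theta, i.e. 18 at theta = 1
loss.backward()
SGD([theta], lr=0.1)
print(theta)  # [1 - 0.1 * 18] = [-0.8]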

Training loop

epochs = 5                 # assumed; not given in the original
learning_rate = .001       # assumed; not given in the original
niter = 0
moving_loss = 0.
smoothing_constant = .01   # assumed; not given in the original

for e in range(epochs):
    total_loss = 0
    for data, label in data_iter():
        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)
        loss.backward()
        SGD(params, learning_rate)
        total_loss += nd.sum(loss).asscalar()

        niter += 1
        curr_loss = nd.mean(loss).asscalar()
        moving_loss = (1 - smoothing_constant) * moving_loss + smoothing_constant * curr_loss
        # bias-corrected estimate of the running loss (see the note below)
        est_loss = moving_loss / (1 - (1 - smoothing_constant) ** niter)
    print('epoch %d, average loss: %f, moving loss estimate: %f'
          % (e, total_loss / num_examples, est_loss))
  • epochs: the number of passes over the training set; within each epoch, batches of batch_size examples are drawn until the data is exhausted. moving_loss is an exponentially weighted moving average of the batch loss, and dividing by 1 - (1 - smoothing_constant)**niter corrects the bias introduced by starting the average at zero; a quick numerical check follows.
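
A quick numerical check of the bias correction (a sketch): feeding a constant loss of 1.0 into the moving average, the raw average starts near zero, while the corrected estimate is exactly 1.0 at every step:

alpha = 0.01
m = 0.0
for t in range(1, 6):
    m = (1 - alpha) * m + alpha * 1.0
    print(t, m, m / (1 - (1 - alpha) ** t))  # the corrected value is always 1.0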

Using gluon

Training set

from mxnet import nd, autograd, gluon

# num_examples, num_inputs, true_w, true_b as in the first section
X = nd.random_normal(shape=(num_examples, num_inputs))
y = true_w[0] * X[:, 0] + true_w[1] * X[:, 1] + true_b
y += .01 * nd.random_normal(shape=y.shape)

batch_size = 10
dataset = gluon.data.ArrayDataset(X, y)
data_iter = gluon.data.DataLoader(dataset, batch_size, shuffle=True)
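
The DataLoader yields the same (data, label) batches as the hand-written iterator in the first section, reshuffling the examples on every pass; a quick check (a sketch):

for data, label in data_iter:
    print(data.shape, label.shape)  # (10, 2) (10,)
    break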

Network

net = gluon.nn.Sequential()  # build an empty model container
net.add(gluon.nn.Dense(1))   # add a Dense (fully connected) layer with 1 output node
net.initialize()             # initialize the parameters
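
Dense(1) fixes only the output dimension; gluon infers the input dimension from the first batch it sees (deferred initialization), so initialize() just registers the initializer. The weight shape can be inspected after one forward pass (a sketch):

net(nd.zeros((3, num_inputs)))     # the first forward pass triggers shape inference
print(net[0].weight.data().shape)  # (1, 2): one output node, num_inputs inputs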

Loss function

square_loss = gluon.loss.L2Loss()
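
Note that gluon's L2Loss computes 0.5 * (yhat - y)**2, half of the hand-written square_loss in the first section; the factor 0.5 only rescales the gradient and is absorbed by the learning rate. A quick check (a sketch):

print(gluon.loss.L2Loss()(nd.array([2.0]), nd.array([0.0])))  # [2.] = 0.5 * (2 - 0)**2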

Optimizer

trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

Training loop

epochs = 5  # assumed; not given in the original
for e in range(epochs):
    total_loss = 0
    for data, label in data_iter:
        with autograd.record():
            output = net(data)
            loss = square_loss(output, label)
        loss.backward()
        trainer.step(batch_size)  # divides the gradient by batch_size, then applies one SGD step
        total_loss += nd.sum(loss).asscalar()
    print('epoch %d, average loss: %f' % (e, total_loss / num_examples))
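
After training, the learned parameters can be read back from the Dense layer and compared against true_w and true_b (a sketch):

dense = net[0]
print(dense.weight.data())  # should be close to true_w
print(dense.bias.data())    # should be close to true_b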