Implementing Linear Regression with TensorFlow

1. First, install TensorFlow 1.x:

[Screenshot: verifying the TensorFlow 1.x installation]
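The install command itself is not shown in the article; a common way to get a 1.x build is via pip (the pinned version below is an assumption, not from the original), followed by a quick check that a 1.x runtime is active:

# Assumed install command for a TensorFlow 1.x build (example version pin):
#   pip install "tensorflow==1.15.*"

import tensorflow as tf

print(tf.__version__)  # expect something like 1.15.x
assert tf.__version__.startswith("1."), "this tutorial targets TensorFlow 1.x"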

2. Code

#%%

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

#%%

# Generate 1000 sample points scattered around the line y = 0.1*x + 0.3
num_points=1000
vectors_set=[]
for i in range(num_points):
    x1=np.random.normal(0.0,0.55)
    y1=x1*0.1+0.3+np.random.normal(0.0,0.03)
    vectors_set.append([x1,y1])
x_data = [v[0] for v in vectors_set]
y_data = [v[1] for v in vectors_set]


plt.scatter(x_data,y_data,c='r')
plt.show()

#%%
# Initialize W with a random uniform value in [-1, 1]; b starts at 0
W = tf.Variable(tf.random.uniform([1],-1.0,1.0),name='W')
b = tf.Variable(tf.zeros([1]),name='b')
y = W*x_data+b

loss = tf.reduce_mean(tf.square(y-y_data),name='loss')
# Use gradient descent to optimize the parameters
optimizer = tf.train.GradientDescentOptimizer(0.5)
# Training is simply minimizing this loss value
train = optimizer.minimize(loss,name='train')

sess=tf.Session()

init = tf.global_variables_initializer()
sess.run(init)

print("W=",sess.run(W),"b=",sess.run(b),",loss=",sess.run(loss))

# Run 50 training steps
for step in range(50):
    sess.run(train)
    print("W=",sess.run(W),"b=",sess.run(b),",loss=",sess.run(loss))
#%%
plt.scatter(x_data,y_data,c='r')
plt.plot(x_data,sess.run(W)*x_data+sess.run(b),c='b')
plt.show()
#%%
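Note that tf.Session and tf.train.GradientDescentOptimizer belong to the 1.x graph API and are not available under TensorFlow 2's default eager mode. If only TensorFlow 2 is installed, one possible workaround (a sketch, not part of the original article) is to import the v1 compatibility module before running the cells above:

# Sketch: running the 1.x-style code above on a TensorFlow 2 install.
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores graph mode so tf.Session() works

# With this `tf` alias in place, the rest of the script runs unchanged.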
  • First, we generate sample points along a linear function and plot them; Gaussian noise is added to each point so the training data looks more realistic.
  • Second, we define the mean squared error loss.
  • Third, gradient descent is used to optimize the parameters W and b.
  • Finally, we plot the line fitted by the trained model.
  • The randomly generated scatter plot:
    [Figure: scatter plot of the noisy sample points]
  • The fitted line after training:
    [Figure: fitted regression line over the scatter plot]
    Parameter values printed during training (a closed-form cross-check follows the log):

W= [0.01236463] b= [0.] ,loss= 0.09335066
W= [0.03935664] b= [0.30034816] ,loss= 0.0020095098
W= [0.05641054] b= [0.3001592] ,loss= 0.0015121858
W= [0.06851513] b= [0.3000398] ,loss= 0.0012616433
W= [0.07710667] b= [0.29995507] ,loss= 0.0011354246
W= [0.08320474] b= [0.29989493] ,loss= 0.001071838
W= [0.087533] b= [0.29985222] ,loss= 0.0010398042
W= [0.09060509] b= [0.29982194] ,loss= 0.0010236664
W= [0.09278558] b= [0.29980043] ,loss= 0.0010155363
W= [0.09433325] b= [0.29978517] ,loss= 0.0010114405
W= [0.09543174] b= [0.29977432] ,loss= 0.0010093772
W= [0.09621143] b= [0.29976663] ,loss= 0.0010083375
W= [0.09676483] b= [0.29976118] ,loss= 0.001007814
W= [0.09715761] b= [0.2997573] ,loss= 0.0010075502
W= [0.09743641] b= [0.29975456] ,loss= 0.0010074173
W= [0.09763429] b= [0.29975262] ,loss= 0.0010073502
W= [0.09777474] b= [0.29975122] ,loss= 0.0010073166
W= [0.09787443] b= [0.29975024] ,loss= 0.0010072995
W= [0.09794518] b= [0.29974955] ,loss= 0.001007291
W= [0.09799541] b= [0.29974905] ,loss= 0.0010072868
W= [0.09803105] b= [0.2997487] ,loss= 0.0010072846
W= [0.09805635] b= [0.29974845] ,loss= 0.0010072835
W= [0.09807431] b= [0.29974827] ,loss= 0.001007283
W= [0.09808706] b= [0.29974815] ,loss= 0.0010072826
W= [0.0980961] b= [0.29974806] ,loss= 0.0010072825
W= [0.09810252] b= [0.299748] ,loss= 0.0010072825
W= [0.09810708] b= [0.29974794] ,loss= 0.0010072824
W= [0.09811032] b= [0.2997479] ,loss= 0.0010072825
W= [0.09811261] b= [0.29974788] ,loss= 0.0010072823
W= [0.09811424] b= [0.29974788] ,loss= 0.0010072823
W= [0.0981154] b= [0.29974785] ,loss= 0.0010072824
W= [0.09811622] b= [0.29974785] ,loss= 0.0010072825
W= [0.0981168] b= [0.29974785] ,loss= 0.0010072824
W= [0.09811722] b= [0.29974785] ,loss= 0.0010072824
W= [0.09811751] b= [0.29974785] ,loss= 0.0010072823
W= [0.09811772] b= [0.29974785] ,loss= 0.0010072823
W= [0.09811787] b= [0.29974785] ,loss= 0.0010072823
W= [0.09811797] b= [0.29974785] ,loss= 0.0010072823
W= [0.09811804] b= [0.29974782] ,loss= 0.0010072824
W= [0.0981181] b= [0.29974782] ,loss= 0.0010072825
W= [0.09811813] b= [0.29974782] ,loss= 0.0010072823
W= [0.09811816] b= [0.29974782] ,loss= 0.0010072823
W= [0.09811818] b= [0.29974782] ,loss= 0.0010072823
W= [0.09811819] b= [0.29974782] ,loss= 0.0010072823
W= [0.0981182] b= [0.29974782] ,loss= 0.0010072823
W= [0.09811821] b= [0.29974782] ,loss= 0.0010072823
W= [0.09811822] b= [0.29974782] ,loss= 0.0010072824
W= [0.09811822] b= [0.29974782] ,loss= 0.0010072824
W= [0.09811822] b= [0.29974782] ,loss= 0.0010072824
W= [0.09811822] b= [0.29974782] ,loss= 0.0010072824
W= [0.09811822] b= [0.29974782] ,loss= 0.0010072824
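As a sanity check (not part of the original article), the trained values W ≈ 0.098 and b ≈ 0.2997 can be compared against a closed-form least-squares fit of the same x_data and y_data, for example with numpy's polyfit; both should sit close to the generating line y = 0.1x + 0.3:

# Closed-form least-squares cross-check on the same data (sketch).
import numpy as np

slope, intercept = np.polyfit(x_data, y_data, 1)  # degree-1 fit returns [slope, intercept]
print("closed-form: slope =", slope, "intercept =", intercept)
# Expected to be close to the trained W and b above, and to the
# generating parameters 0.1 and 0.3, up to the added noise.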