Selected Exercises from Chapter 3 of Machine Learning (the "Watermelon Book")

Exercise 3.2

Prove that, with respect to the parameter $\bm{w}$, the logistic-regression objective (3.18) is non-convex, while its log-likelihood (3.27) is convex.

A reference answer: https://blog.csdn.net/icefire_tyh/article/details/52069025
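
A sketch of the key computation (my own summary of the standard argument, not a full proof), using the book's shorthand $\hat{\bm{x}} = (\bm{x}; 1)$, $\bm{\beta} = (\bm{w}; b)$ and $p_1(\hat{\bm{x}}; \bm{\beta}) = \frac{e^{\bm{\beta}^T\hat{\bm{x}}}}{1 + e^{\bm{\beta}^T\hat{\bm{x}}}}$:

$$
\ell(\bm{\beta}) = \sum_{i=1}^{m}\Bigl(-y_i\bm{\beta}^T\hat{\bm{x}}_i + \ln\bigl(1 + e^{\bm{\beta}^T\hat{\bm{x}}_i}\bigr)\Bigr), \qquad
\frac{\partial^2 \ell}{\partial\bm{\beta}\,\partial\bm{\beta}^T} = \sum_{i=1}^{m}\hat{\bm{x}}_i\hat{\bm{x}}_i^T\,p_1(1-p_1) \succeq 0,
$$

so (3.27) is convex. For (3.18), $y = \bigl(1 + e^{-(\bm{w}^T\bm{x}+b)}\bigr)^{-1}$, the Hessian with respect to $\bm{w}$ is $y(1-y)(1-2y)\,\bm{x}\bm{x}^T$, whose sign flips with $(1-2y)$, so it is neither positive nor negative semi-definite everywhere and the function is non-convex.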

Exercise 3.3

Implement logistic regression and report the results on the watermelon dataset 3.0α.

TensorFlow version

Code adapted from: https://blog.csdn.net/qq_25366173/article/details/80223523

import tensorflow as tf
import matplotlib.pyplot as plt

# Placeholders and sessions belong to the TF 1.x graph API; under TF 2.x,
# eager execution has to be switched off first.
tf.compat.v1.disable_eager_execution()

data_x = [[0.697, 0.460], [0.774, 0.376], [0.634, 0.264], [0.608, 0.318], [0.556, 0.215], [0.403, 0.237], [0.481, 0.149], [0.437, 0.211],
          [0.666, 0.091], [0.243, 0.267], [0.245, 0.057], [0.343, 0.099], [0.639, 0.161], [0.657, 0.198], [0.360, 0.370], [0.593, 0.042], [0.719, 0.103]]
data_y = [[1], [1], [1], [1], [1], [1], [1], [1], [0], [0], [0], [0], [0], [0], [0], [0], [0]]

# model parameters and inputs (dtypes live in the top-level namespace: tf.float32)
W = tf.compat.v1.get_variable(name="weight", dtype=tf.float32, shape=[2, 1])
b = tf.compat.v1.get_variable(name="bias", dtype=tf.float32, shape=[])
x = tf.compat.v1.placeholder(name="x_input", dtype=tf.float32, shape=[None, 2])
y_ = tf.compat.v1.placeholder(name="y_output", dtype=tf.float32, shape=[None, 1])
ty = tf.compat.v1.matmul(x, W) + b    # logits: w^T x + b
y = tf.compat.v1.sigmoid(ty)          # predicted probability
# average cross-entropy over the samples, computed on the logits
loss = tf.compat.v1.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=ty, labels=y_))
trainer = tf.compat.v1.train.AdamOptimizer(0.04).minimize(loss)

with tf.compat.v1.Session() as sess:
    steps = 500
    sess.run(tf.compat.v1.global_variables_initializer())
    for i in range(steps):
        sess.run(trainer, feed_dict={x : data_x, y_ : data_y})

    for i in range(len(data_x)):
        if data_y[i] == [1]:
            plt.plot(data_x[i][0], data_x[i][1], 'ob')
        else:
            plt.plot(data_x[i][0], data_x[i][1], '^g')
    # Fetch the learned parameters and draw the decision boundary
    # w_0 * x + w_1 * y + b = 0 through its two axis intercepts.
    [[w_0, w_1], b_] = sess.run([W, b])
    w_0 = w_0[0]
    w_1 = w_1[0]
    x_0 = -b_ / w_0  # intercept on the horizontal axis: (x_0, 0)
    x_1 = -b_ / w_1  # intercept on the vertical axis: (0, x_1)
    plt.plot([x_0, 0], [0, x_1])
    plt.show()

The result (figure omitted: the training samples with the learned decision boundary).
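
A quick sanity check (a sketch of my own, to be appended to the script above so that data_x, data_y and the extracted w_0, w_1, b_ are in scope): count how many training samples the learned model gets wrong.

import numpy as np

probs = 1 / (1 + np.exp(-(np.array(data_x) @ np.array([w_0, w_1]) + b_)))  # sigmoid(w^T x + b)
preds = (probs >= 0.5).astype(int)                                         # threshold at 0.5
errors = int(np.sum(preds != np.array(data_y).ravel()))
print("training errors: %d / %d" % (errors, len(data_y)))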
A quick TensorFlow primer:
(1) TensorFlow: The Confusing Parts
(2) TensorFlow: The Confusing Parts (II)

From-scratch version

import numpy as np
import math
import matplotlib.pyplot as plt

data_x = [[0.697, 0.460], [0.774, 0.376], [0.634, 0.264], [0.608, 0.318], [0.556, 0.215], [0.403, 0.237],
          [0.481, 0.149], [0.437, 0.211],
          [0.666, 0.091], [0.243, 0.267], [0.245, 0.057], [0.343, 0.099], [0.639, 0.161], [0.657, 0.198],
          [0.360, 0.370], [0.593, 0.042], [0.719, 0.103]]
data_y = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

def combine(beta, x):
    # beta^T * x_hat, where x_hat = (x; 1); return a plain scalar
    x = np.mat(x + [1.]).T
    return (beta.T * x)[0, 0]

def predict(beta, x):
    # sigmoid of the linear combination, eq. (3.18)
    return 1 / (1 + math.exp(-combine(beta, x)))

def p1(beta, x):
    # posterior probability p(y = 1 | x_hat), eq. (3.23)
    return math.exp(combine(beta, x)) / (1 + math.exp(combine(beta, x)))

beta = np.mat([0.] * 3).T

steps = 50

for step in range(steps):
    # first derivative of the log-likelihood (3.27) w.r.t. beta, eq. (3.30)
    param_1 = np.zeros((3, 1))
    for i in range(len(data_x)):
        x = np.mat(data_x[i] + [1.]).T
        param_1 = param_1 - x * (data_y[i] - p1(beta, data_x[i]))
    # second derivative (Hessian), eq. (3.31)
    param_2 = np.zeros((3, 3))
    for i in range(len(data_x)):
        x = np.mat(data_x[i] + [1.]).T
        param_2 = param_2 + x * x.T * p1(beta, data_x[i]) * (1 - p1(beta, data_x[i]))
    # Newton update, eq. (3.29); stop once beta barely moves
    last_beta = beta
    beta = last_beta - param_2.I * param_1
    if np.linalg.norm(last_beta.T - beta.T) < 1e-6:
        print(step)
        break

for i in range(len(data_x)):
    if data_y[i] == 1:
        plt.plot(data_x[i][0], data_x[i][1], 'ob')
    else:
        plt.plot(data_x[i][0], data_x[i][1], '^g')
w_0 = beta[0, 0]
w_1 = beta[1, 0]
b = beta[2, 0]
print(w_0, w_1, b)
x_0 = -b / w_0 #(x_0, 0)
x_1 = -b / w_1 #(0, x_1)
plt.plot([x_0, 0], [0, x_1])
plt.show()

The result (figure omitted: the training samples with the decision boundary learned by Newton's method).
(Readers who enjoy building things from scratch and tuning their code may refer to Andrew Ng's lectures on vectorization: deeplearning.ai, also available on NetEase Open Courses. A vectorized sketch follows below.)
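
For reference, a vectorized sketch of the same Newton iteration (my own reformulation in plain NumPy, reusing data_x and data_y from above, not code from the referenced course): X is the m×3 design matrix with a constant column appended, and the two inner loops collapse into matrix products.

import numpy as np

X = np.hstack([np.array(data_x), np.ones((len(data_x), 1))])  # m x 3, each row is (x; 1)
y = np.array(data_y, dtype=float)                              # length-m label vector
beta = np.zeros(3)

for step in range(50):
    p = 1 / (1 + np.exp(-X @ beta))                 # p1 for every sample at once
    grad = -X.T @ (y - p)                           # eq. (3.30)
    hess = (X * (p * (1 - p))[:, None]).T @ X       # eq. (3.31)
    new_beta = beta - np.linalg.solve(hess, grad)   # eq. (3.29)
    if np.linalg.norm(new_beta - beta) < 1e-6:
        beta = new_beta
        break
    beta = new_beta

print(beta)   # (w_0, w_1, b), should match the loop version above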

Exercise 3.5

Implement linear discriminant analysis and report the results on the watermelon dataset 3.0α.
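
As a reminder of what the code below computes (a restatement of the book's eqs. (3.33) and (3.39); see the book for the derivation):

$$
\bm{S}_w = \sum_{\bm{x}\in X_0}(\bm{x}-\bm{\mu}_0)(\bm{x}-\bm{\mu}_0)^T + \sum_{\bm{x}\in X_1}(\bm{x}-\bm{\mu}_1)(\bm{x}-\bm{\mu}_1)^T, \qquad
\bm{w} = \bm{S}_w^{-1}(\bm{\mu}_0 - \bm{\mu}_1),
$$

where $\bm{\mu}_0, \bm{\mu}_1$ are the class means; the inverse is computed via the SVD of $\bm{S}_w$ for numerical stability.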

import numpy as np
import math
import matplotlib.pyplot as plt

data_x = [[0.697, 0.460], [0.774, 0.376], [0.634, 0.264], [0.608, 0.318], [0.556, 0.215], [0.403, 0.237],
          [0.481, 0.149], [0.437, 0.211],
          [0.666, 0.091], [0.243, 0.267], [0.245, 0.057], [0.343, 0.099], [0.639, 0.161], [0.657, 0.198],
          [0.360, 0.370], [0.593, 0.042], [0.719, 0.103]]
data_y = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# accumulate the class means: mu_0 for the negative class, mu_1 for the positive class
mu_0 = np.mat([0., 0.]).T
mu_1 = np.mat([0., 0.]).T
count_0 = 0
count_1 = 0
for i in range(len(data_x)):
    x = np.mat(data_x[i]).T
    if data_y[i] == 1:
        mu_1 = mu_1 + x
        count_1 = count_1 + 1
    else:
        mu_0 = mu_0 + x
        count_0 = count_0 + 1
mu_0 = mu_0 / count_0
mu_1 = mu_1 / count_1

# within-class scatter matrix S_w, eq. (3.33)
S_w = np.mat([[0., 0.], [0., 0.]])
for i in range(len(data_x)):
    # Note: the book treats input vectors as column vectors
    x = np.mat(data_x[i]).T
    if data_y[i] == 0:
        S_w = S_w + (x - mu_0) * (x - mu_0).T
    else:
        S_w = S_w + (x - mu_1) * (x - mu_1).T

# Invert S_w through its SVD for numerical stability (as the book suggests):
# S_w = U * Sigma * V^T, so S_w^{-1} = V * Sigma^{-1} * U^T
u, sigmav, vt = np.linalg.svd(S_w)
sigma = np.zeros([len(sigmav), len(sigmav)])
for i in range(len(sigmav)):
    sigma[i][i] = sigmav[i]
sigma = np.mat(sigma)
S_w_inv = vt.T * sigma.I * u.T
# eq. (3.39): w = S_w^{-1} * (mu_0 - mu_1)
w = S_w_inv * (mu_0 - mu_1)

w_0 = w[0, 0]
w_1 = w[1, 0]
tan = w_1 / w_0
sin = w_1 / math.sqrt(w_0 ** 2 + w_1 ** 2)
cos = w_0 / math.sqrt(w_0 ** 2 + w_1 ** 2)

print(w_0, w_1)

for i in range(len(data_x)):
    if data_y[i] == 0:
        plt.plot(data_x[i][0], data_x[i][1], "go")
    else:
        plt.plot(data_x[i][0], data_x[i][1], "b^")
plt.plot(mu_0[0, 0], mu_0[1, 0], "ro")
plt.plot(mu_1[0, 0], mu_1[1, 0], "r^")
plt.plot([-0.1, 0.1], [-0.1 * tan, 0.1 * tan])

# Draw each sample's projection onto the line spanned by w (hollow markers);
# ell = w^T x, so the plotted positions are the true projections scaled by ||w||.
for i in range(len(data_x)):
    x = np.mat(data_x[i]).T
    ell = w.T * x
    ell = ell[0, 0]
    if data_y[i] == 0:
        plt.scatter(cos * ell, sin * ell, marker='o', facecolors='none', edgecolors='g')
    else:
        plt.scatter(cos * ell, sin * ell, marker='^', facecolors='none', edgecolors='b')
plt.show()

The result (figures omitted: the samples, the two class means in red, the projection line, and the hollow projected points; a second figure zooms in on the projected points).

After the dimensionality reduction, the separating "hyperplane" is the point indicated by the red arrow in the (omitted) figure: in the one-dimensional space this split point is learned, and samples projecting above it fall into one class while samples below it fall into the other. Consistent with the results of Exercise 3.3, three blue points and two green points are misclassified. This exercise shows that data that is not linearly separable in the higher-dimensional space remains linearly non-separable after the reduction.
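
One simple way to realize the one-dimensional split point mentioned above (a sketch of my own, not the book's prescription, reusing w, mu_0, mu_1, data_x and data_y from the snippet above): take the midpoint of the two projected class means as the threshold and count the training errors.

# threshold: midpoint of the projected class means
t = ((w.T * mu_0)[0, 0] + (w.T * mu_1)[0, 0]) / 2
errors = 0
for xi, yi in zip(data_x, data_y):
    proj = (w.T * np.mat(xi).T)[0, 0]
    pred = 0 if proj > t else 1   # w = S_w^{-1}(mu_0 - mu_1) points towards class 0
    errors += int(pred != yi)
print("threshold:", t, "training errors:", errors, "/", len(data_y))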

Exercise 3.6

Linear discriminant analysis achieves ideal results only on linearly separable data. Design an improvement so that it also works well on data that is not linearly separable.

If the data is not linearly separable in its current dimension, it stays non-separable after reducing the dimension. What about increasing the dimension instead? That is, map the current data $\bm{x}$ to a higher-dimensional $\phi(\bm{x})$. Following the idea of Exercise 3.5, this time we go from one dimension up to two and see what happens.

Consider $\bm{x}_1 = (0.1)$, $\bm{x}_2 = (-0.35)$, $\bm{x}_3 = (3)$, $\bm{x}_4 = (-4.1)$, $\bm{x}_5 = (2.7)$, with labels $y_1 = 0$, $y_2 = 0$, $y_3 = 1$, $y_4 = 1$, $y_5 = 1$; this data is clearly not linearly separable in one dimension. If we design the mapping $(x'_1, x'_2) = \phi(\bm{x}) = (x_1, x_1^2)$, we obtain the figure below:
(figure omitted: the five mapped samples in the plane, with a red separating line)
In the two-dimensional space the data is now linearly separable (the red line is the separating hyperplane).
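
A minimal sketch reproducing the idea behind that figure (the horizontal red line is an illustrative separator picked by hand, not a learned one):

import numpy as np
import matplotlib.pyplot as plt

xs = np.array([0.1, -0.35, 3.0, -4.1, 2.7])
ys = np.array([0, 0, 1, 1, 1])

phi = np.column_stack([xs, xs ** 2])   # lift each 1-D sample x to (x, x^2)

plt.scatter(phi[ys == 0, 0], phi[ys == 0, 1], marker='o', c='g', label='y = 0')
plt.scatter(phi[ys == 1, 0], phi[ys == 1, 1], marker='^', c='b', label='y = 1')
plt.axhline(2.0, color='r')            # e.g. x'_2 = 2 separates the two classes here
plt.legend()
plt.show()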

If a suitable mapping $\phi(\cdot)$ can be found, the problem of linear non-separability in the low-dimensional space can be solved. In practice, however, finding such a mapping is far from easy; previewing Chapter 6 on SVMs shows the role the kernel function $\kappa$ plays here, which we will not expand on.

Acknowledgements

Exercise 3.2 references:
https://blog.csdn.net/icefire_tyh/article/details/52069025
Thanks to @四去六進一

Exercise 3.3 references:
https://blog.csdn.net/qq_25366173/article/details/80223523
Thanks to @Liubinxiao
https://blog.csdn.net/da_kao_la/article/details/81908154
Thanks to @da_kao_la

The TensorFlow primer references:
https://mp.weixin.qq.com/s/JVSxFFIyW4yCuV1LoIKM3g
Thanks to @Jacob Buckman and @機器之心
https://mp.weixin.qq.com/s/P8oJV1UUr0cHQ9iulcPAmw
Thanks to @Jacob Buckman and @機器之心

Exercise 3.5 references:
https://blog.csdn.net/macunshi/article/details/80756016
Thanks to @言寺之風雅頌
https://blog.51cto.com/13959448/2327130
Thanks to @myhaspl
https://www.cnblogs.com/Jerry-Dong/p/8177094.html
Thanks to @從菜鳥開始
