1.1 introduction
notation | meaning | remark |
---|---|---|
m | number of training examples | |
x's | input variable / features | |
y's | output variable / target variable | |
(x^(i), y^(i)) | the i-th training example | |
h | hypothesis function; h maps from x's to y's | the hypothesis is output by the learning algorithm |
:= | assignment operator | |
= | equality assertion | e.g. a = b asserts that a equals b |
alpha | learning rate | |
1.2 model and cost function
- cost function: measures the difference between the predicted values and the true values;
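The cost function above can be sketched in code. This is a minimal sketch assuming the usual squared-error cost for linear regression with hypothesis h(x) = theta0 + theta1 * x; the function name `cost` and the 1/(2m) scaling follow that convention.

```python
def cost(theta0, theta1, xs, ys):
    """Squared-error cost J(theta0, theta1) for linear regression."""
    m = len(xs)  # number of training examples
    total = 0.0
    for x, y in zip(xs, ys):
        h = theta0 + theta1 * x    # prediction of the hypothesis
        total += (h - y) ** 2      # squared gap to the true value
    return total / (2 * m)        # 1/(2m) factor simplifies the gradient later
```

For example, with theta0 = 0 and theta1 = 1 the hypothesis fits the data (1, 1), (2, 2), (3, 3) exactly, so the cost is 0.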
1.3 parameter learning
- gradient descent: initialize the parameters theta0 and theta1 to 0, then repeatedly adjust them so that the cost function J(theta0, theta1) reaches its minimum;
  theta0 and theta1 must be updated simultaneously;
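The simultaneous update described above can be sketched as follows. This assumes the squared-error cost for linear regression; the tuple assignment ensures both gradients are computed from the old parameter values before either parameter changes.

```python
def gradient_descent(xs, ys, alpha=0.1, iters=5000):
    """Minimize J(theta0, theta1) for h(x) = theta0 + theta1 * x."""
    m = len(xs)
    theta0, theta1 = 0.0, 0.0  # initialize both parameters to 0
    for _ in range(iters):
        # partial derivatives of J with respect to theta0 and theta1
        d0 = sum(theta0 + theta1 * x - y for x, y in zip(xs, ys)) / m
        d1 = sum((theta0 + theta1 * x - y) * x for x, y in zip(xs, ys)) / m
        # simultaneous update: both gradients use the old theta values
        theta0, theta1 = theta0 - alpha * d0, theta1 - alpha * d1
    return theta0, theta1
```

Updating theta0 first and then computing d1 from the new theta0 would break the simultaneity requirement, which is why both right-hand sides are evaluated before assignment.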