The conjugate gradient (CG) method is mainly used to solve linear systems and quadratic optimization problems.
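These two problems are the same: for a symmetric positive definite A, minimizing f(x) = (1/2)xᵀAx − bᵀx is equivalent to solving Ax = b, since ∇f(x) = Ax − b. A minimal illustration (the matrix below is an arbitrary example, not from the slides):

```python
import numpy as np

# For symmetric positive definite A, the minimizer of
# f(x) = (1/2) x^T A x - b^T x is exactly the solution of A x = b,
# because the gradient is grad f(x) = A x - b.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x_star = np.linalg.solve(A, b)   # solve the linear system directly
grad = A.dot(x_star) - b         # gradient of f at that solution

print(np.allclose(grad, np.zeros(2)))  # the minimizer has zero gradient
```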
Definition and properties of A-conjugacy
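Two vectors p and q are A-conjugate (A-orthogonal) when pᵀAq = 0. A quick numerical check, using the fact that eigenvectors of a symmetric matrix with distinct eigenvalues are both orthogonal and A-conjugate (the matrix here is illustrative):

```python
import numpy as np

# p, q are A-conjugate if p^T A q = 0. For symmetric A, eigenvectors
# belonging to distinct eigenvalues satisfy p^T A q = lambda_q * p^T q = 0,
# so they give an easy sanity check of the definition.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
_, vecs = np.linalg.eigh(A)      # orthonormal eigenvectors of symmetric A
p, q = vecs[:, 0], vecs[:, 1]

print(abs(p.dot(A).dot(q)) < 1e-12)  # A-conjugate
print(abs(p.dot(q)) < 1e-12)         # also orthogonal in this special case
```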
The first property follows easily from the definition of linear independence; the second property is the theorem below.
The quantity in the slides can be computed from the expression; the proof of the theorem is as follows:
How to determine the n basis directions; the directions obtained by this method are pairwise A-conjugate.
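One standard way to build such a basis (a sketch, not necessarily the construction on the slides) is a Gram-Schmidt process in the A-inner product: start from any n linearly independent vectors and subtract from each candidate its A-projection onto the directions already built. The helper name `conjugate_basis` and the test matrix are illustrative:

```python
import numpy as np

# Gram-Schmidt in the A-inner product: orthogonalize the standard basis
# with respect to <u, v>_A = u^T A v, yielding pairwise A-conjugate
# directions for a symmetric positive definite A.
def conjugate_basis(A):
    n = A.shape[0]
    dirs = []
    for i in range(n):
        d = np.eye(n)[:, i].copy()
        for p in dirs:
            # remove the A-projection of d onto the existing direction p
            d -= (p.dot(A).dot(d) / p.dot(A).dot(p)) * p
        dirs.append(d)
    return dirs

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
ps = conjugate_basis(A)
# every pair should satisfy p_i^T A p_j = 0 for i != j
print(all(abs(ps[i].dot(A).dot(ps[j])) < 1e-10
          for i in range(3) for j in range(3) if i != j))
```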
CG algorithm, description 1:
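The description itself is on the slides; as a reference point, here is a sketch of the textbook CG iteration (r for the residual, p for the search direction; the function name `cg` and the sample system are illustrative):

```python
import numpy as np

# Standard (unpreconditioned) CG for A x = b, A symmetric positive definite.
def cg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A.dot(x)       # residual, equal to -grad f(x)
    p = r.copy()           # initial search direction
    for _ in range(max_iter):
        alpha = r.dot(r) / p.dot(A).dot(p)   # exact line search step
        x = x + alpha * p
        r_new = r - alpha * A.dot(p)
        if np.linalg.norm(r_new) < tol:
            return x
        beta = r_new.dot(r_new) / r.dot(r)   # keeps directions A-conjugate
        p = r_new + beta * p
        r = r_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(np.allclose(A.dot(cg(A, b)), b))
```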
Some properties
Using these properties, one can derive an equivalent description of the CG algorithm:
The number of iterations is at most the number of distinct eigenvalues of A.
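This can be checked numerically: the matrix below is 6×6 but has only two distinct eigenvalues, so exact-arithmetic CG must terminate within two steps (the example matrix is mine, not from the slides):

```python
import numpy as np

# A is 6x6 with only two distinct eigenvalues (1 and 3), so CG should
# drive the residual to (numerical) zero in at most 2 iterations.
A = np.diag([1.0, 1.0, 1.0, 3.0, 3.0, 3.0])
b = np.ones(6)

x = np.zeros(6)
r = b - A.dot(x)
p = r.copy()
steps = 0
while np.linalg.norm(r) > 1e-10 and steps < 10:
    alpha = r.dot(r) / p.dot(A).dot(p)
    x = x + alpha * p
    r_new = r - alpha * A.dot(p)
    beta = r_new.dot(r_new) / r.dot(r)
    p = r_new + beta * p
    r = r_new
    steps += 1

print(steps <= 2)   # at most as many steps as distinct eigenvalues
```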
Using this theorem, the CG algorithm can be improved to obtain the preconditioned conjugate gradient (PCG) algorithm: a preconditioner clusters the eigenvalues, which reduces the number of iterations.
PCG algorithm description
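As a generic reference (the homework code below uses a simple scalar preconditioner instead), here is a sketch of PCG with a Jacobi preconditioner M = diag(A); the name `pcg` and the sample system are illustrative:

```python
import numpy as np

# PCG with a Jacobi (diagonal) preconditioner M = diag(A): each step
# solves M y = r cheaply and uses y in place of the raw residual r.
def pcg(A, b, tol=1e-10, max_iter=1000):
    M_inv = 1.0 / np.diag(A)            # inverse of the diagonal preconditioner
    x = np.zeros_like(b)
    r = b - A.dot(x)
    y = M_inv * r                       # preconditioned residual
    p = y.copy()
    for _ in range(max_iter):
        alpha = r.dot(y) / p.dot(A).dot(p)
        x = x + alpha * p
        r_new = r - alpha * A.dot(p)
        if np.linalg.norm(r_new) < tol:
            return x
        y_new = M_inv * r_new
        beta = r_new.dot(y_new) / r.dot(y)
        p = y_new + beta * p
        r, y = r_new, y_new
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(np.allclose(A.dot(pcg(A, b)), b))
```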
Homework
PCG in practice (homework)
import numpy as np

MAX = 1000          # maximum number of iterations
PRECISION = 1e-6    # stopping tolerance on the change in f

class Function:
    """Quadratic objective f(x) = (1/2) x^T A x - b^T x."""
    def __init__(self, mat, vec):
        self.mat = mat
        self.vec = vec
        self.dim = len(mat)

    def grad(self, x):
        # gradient of f: A x - b
        return np.dot(self.mat, x) - self.vec

    def __call__(self, x):
        return (1/2)*x.T.dot(self.mat).dot(x) - self.vec.T.dot(x)

def minimize(f):
    x = np.zeros((f.dim, 1))
    r = f.grad(x)
    y = (1/10)*r        # preconditioned residual; here M^{-1} = (1/10) I
    p = -y              # initial search direction
    number = 0
    while number < MAX:
        a = (r.T.dot(y))/(p.T.dot(f.mat).dot(p))    # step length
        x1 = x + a*p
        r1 = f.grad(x1)
        y1 = (1/10)*r1
        b = (r1.T.dot(y1))/(r.T.dot(y))             # conjugacy coefficient
        p = -y1 + b*p
        if abs(f(x1) - f(x)) < PRECISION:           # stop when f stagnates
            break
        x, y, r = x1, y1, r1
        number += 1
    return number, f(x)

def create_coefficient(dim):
    # Hilbert matrix A[i][j] = 1/(i+j+1): symmetric positive definite
    # but notoriously ill-conditioned, a standard stress test for CG/PCG.
    b = np.ones((dim, 1))
    A = np.array([[1/(i+j+1) for j in range(dim)] for i in range(dim)])
    return A, b

if __name__ == '__main__':
    for dim in (5, 8, 12, 20):
        A, b = create_coefficient(dim)
        print(minimize(Function(A, b)))
Results (iteration count, minimum value of f):
(5, array([[-12.49999988]]))
(19, array([[-31.99999995]]))
(20, array([[-48.44638437]]))
(11, array([[-59.08143301]]))