Using Automatic Differentiation

An earlier optimization algorithm of mine needs the gradient and the Hessian matrix. For performance, I had derived the analytical expressions by hand and coded them in directly. Later I learned that automatic differentiation (AD) can automate this process, so I looked into it.

 

For an introduction to the theory of automatic differentiation, see the following links:

https://blog.csdn.net/qq_38640439/article/details/81674466

https://www.cnblogs.com/wxshi/p/8007106.html

In essence, forward-mode differentiation applies rules similar to complex-number arithmetic to the basic elementary functions, so that every computation yields both the 0th- and 1st-order results (the function value and the first derivative); a single traversal of the expression tree then produces the function value and the first derivative together. Reverse-mode differentiation instead records the function value and the local derivative at every elementary operation node, then applies the chain rule while traversing the expression tree to recover the function value and the derivatives. The core requirement in both cases is operator overloading for the "active" variables.
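The forward-mode idea can be sketched in a few lines of C++ with a hand-rolled dual-number type (a toy illustration for intuition only, not any of the libraries discussed below):

```cpp
#include <cmath>

// Forward-mode AD sketch: a "dual number" carries the function value (v)
// and the derivative (d) through every operation, so one evaluation of an
// expression yields both f(x) and f'(x).
struct Dual {
    double v;  // 0th order: function value
    double d;  // 1st order: derivative w.r.t. the seeded input
};

// Each overloaded operator propagates the value and the derivative
// according to the usual differentiation rules.
inline Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
inline Dual operator-(Dual a, Dual b) { return {a.v - b.v, a.d - b.d}; }
inline Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
inline Dual exp(Dual a) { return {std::exp(a.v), std::exp(a.v) * a.d}; }

// Seed the input with derivative 1 to differentiate w.r.t. it;
// constants carry derivative 0.
inline Dual variable(double x) { return {x, 1.0}; }
inline Dual constant(double c) { return {c, 0.0}; }

// Example: f(x) = x*x + exp(x), so f'(x) = 2x + exp(x).
inline Dual f(Dual x) { return x * x + exp(x); }
```

Evaluating `f(variable(2.0))` returns both f(2) = 4 + e² in `.v` and f'(2) = 4 + e² in `.d` from a single pass.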

 

There is a dedicated website that collects papers and tool packages on automatic differentiation:

http://www.autodiff.org/

Since my existing code is in C++, I browsed through the list and narrowed it down to the following libraries.

adept

http://www.met.rdg.ac.uk/clouds/adept/

adol-c

https://github.com/coin-or/ADOL-C

autodiff

https://autodiff.github.io/

CoDiPack

https://www.scicomp.uni-kl.de/software/codi/

cppad

https://coin-or.github.io/CppAD/doc/cppad.htm

FastAD

https://github.com/JamesYang007/FastAD

 

These libraries were selected on two criteria: first, whether they are still being updated — anything long abandoned was passed over; second, their dependencies — anything with too many dependencies and a painful build was also dropped. Libraries whose home pages were unreachable I simply treated as nonexistent. The remaining candidates all looked reasonably suitable, so I examined them one by one.

 

After some study, it turned out that every library implements the core AD functionality well; the differences lie mainly in how they interact with existing code, and in how detailed the documentation is and how many examples are provided. Of these, adept, CoDiPack and autodiff are pure C++ header-only libraries — no build step, just include them in the project; the other three, ADOL-C, CppAD and FastAD, need to be compiled, which is not hard either.

The next problem was trickier. My program uses BLAS for matrix multiplication, while the basic approach of AD is to replace the independent variables of the original function with library-defined types, called active variables. These variables cannot be passed to a BLAS library directly, so whether a library provides matrix operations for its active variables became the key criterion. In principle one could write that part oneself, but it is extra work.
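One way around the BLAS restriction is to write the linear algebra generically over the scalar type, so the same routine accepts either plain `double` or an AD library's active type. A minimal sketch (my own illustration, not taken from any of the libraries):

```cpp
#include <vector>

// Generic matrix-vector product y = A * x for a column-major matrix A
// of size rows x cols. The scalar type T is a template parameter, so an
// AD active type (anything supporting double*T and T+T) can be
// substituted for double — at the cost of losing the tuned BLAS kernel.
template <typename T>
std::vector<T> gemv_colmajor(const std::vector<double>& A,
                             const std::vector<T>& x,
                             int rows, int cols) {
    std::vector<T> y(rows, T(0));
    for (int j = 0; j < cols; ++j)
        for (int i = 0; i < rows; ++i)
            y[i] = y[i] + A[j * rows + i] * x[j];  // column-major indexing
    return y;
}
```

With `T = double` this reproduces `cblas_dgemv` (with alpha = 1, beta = 0); with an active type it lets the derivative information flow through the matrix product.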

I tested them one by one:

1. adept implements matrix operations, but provides only first derivatives; a second-order Hessian can be assembled from the first-order Jacobian, but that does not support nonlinear functions;

2. CoDiPack does not implement matrix operations, but does support higher-order derivatives;

3. autodiff supports matrix operations via Eigen, as well as higher-order derivatives;

4. ADOL-C has fairly complete documentation and plenty of examples, supports higher-order derivatives, and even CUDA and OpenMP; but its dependencies on Boost and ColPack are too heavy, and I found no matrix-operation interface, only a mechanism for differentiating external functions, so I did not investigate it further;

5. CppAD implements matrix operations in its examples and supports higher-order derivatives, but its documentation is sparse; there are many examples, yet reading through source code one file at a time is inefficient;

6. FastAD, like autodiff, implements matrix operations via Eigen, but I am not sure it supports higher-order derivatives, and its documentation is also thin, so I did not study it further.

Based on these results, I chose autodiff as the library to use. One remaining concern: autodiff requires C++17, and I was not sure whether my compiler, VS2017, supports it. After some digging I found that the C++ language standard can be set under Project Properties → C/C++ → Language; a quick try compiled successfully, so the last obstacle was gone.

I wrote a test case as follows:

test.cpp

#include "mkl.h"
#include <cstdio>   // printf
#include <cstring>  // memcpy, memset
// autodiff include
#include <autodiff/reverse/var.hpp>
#include <autodiff/reverse/var/eigen.hpp>

Eigen::MatrixXd m_ker_ad;
double* m_ker;
double* m_tt, *m_bs;
const int m_row = 8, m_col = 2;
const double m_step = 1e-3;

double m_fval;
Eigen::VectorXd m_gradient;
Eigen::MatrixXd m_hessian;

autodiff::var calc_y(const autodiff::ArrayXvar& x)
{
	int i;
	autodiff::var Aexp = x[0];
	autodiff::var Texp = x[1];
	autodiff::var Acos = x[2];
	autodiff::var Tcos = x[3];

	autodiff::VectorXvar fkxBs(m_row);  // fv*k*xB - bs
	autodiff::VectorXvar xB = x(Eigen::seq(4,Eigen::last));

	// calc fkxBs = k*xB
	fkxBs = m_ker_ad * xB;

	for (i = 0; i < m_row; i++)
	{
		autodiff::var fv = 1 - Aexp * (1 - exp(-m_tt[i] / Texp)) - Acos * (1 - cos(2 * 3.14 * m_tt[i] / Tcos));
		fkxBs(i) = fkxBs(i) * fv - m_bs[i];
	}

	autodiff::var tmp1 = fkxBs.dot(fkxBs);

	return tmp1;
}

bool hessian(int n, const double* x, double* hess)
{
	autodiff::VectorXvar xx(n);
	for (int i = 0; i < n; i++)
	{
		xx(i) = x[i];
	}

	autodiff::var u = calc_y(xx);  // the output variable u

	m_hessian = autodiff::hessian(u, xx, m_gradient);  // evaluate the Hessian matrix H and the gradient vector g of u

	for (int i = 0; i < n; i++)
	{
		for (int j = 0; j <= i; j++)
		{
			hess[i * n + j] = m_hessian(i, j);
			hess[j * n + i] = hess[i * n + j];
		}
	}

	return true;
}

double eval(int n, const double* x)
{
	int i;
	double Aexp = x[0];
	double Texp = x[1];
	double Acos = x[2];
	double Tcos = x[3];
	double* fv = new double[m_row];
	double* fkxBs = new double[m_row];  // fv*k*xB - bs
	double* xB = new double[m_col];

	memset(fkxBs, 0, sizeof(double) * m_row);
	memcpy(xB, &x[4], sizeof(double) * m_col);

	// calc fkxBs = k*xB
	cblas_dgemv(CblasColMajor, CblasNoTrans, m_row, m_col,
		1, m_ker, m_row, xB, 1, 0, fkxBs, 1);

	for (i = 0; i < m_row; i++)
	{
		fv[i] = 1 - Aexp * (1 - exp(-m_tt[i] / Texp)) - Acos * (1 - cos(2 * 3.14 * m_tt[i] / Tcos));
		fkxBs[i] = fkxBs[i] * fv[i] - m_bs[i];
	}

	double tmp1 = cblas_ddot(m_row, fkxBs, 1, fkxBs, 1);

	delete[] fv;
	delete[] fkxBs;
	delete[] xB;

	return tmp1;
}

bool eval_grad(int n, const double* x, double* grad_f, double* fi)
{
	double* xcopy = new double[n];
	double fval, h = m_step;
	int i;

	fval = eval(n, x);
	for (i = 0; i < n; i++)
	{
		memcpy(xcopy, x, sizeof(double) * n);
		xcopy[i] = xcopy[i] + h;
		fi[i] = eval(n, xcopy);
		grad_f[i] = (fi[i] - fval) / h;
		//printf("Norm: grad[%d] = %g\n", i, grad_f[i]);
	}

	delete[] xcopy;
	return true;
}

void hess_cpu(int n, const double* x, double* hess)
{
	int i, j;
	double h = m_step, fval;
	double* grad_f = new double[n];
	double* fi = new double[n];
	double* fij = new double[n * n];

	// before process, need to calc the following
	// calc f
	// calc fi = f(x[i] + h)
	// calc fij = f(xi+h, xj+h)
	fval = eval(n, x);
	eval_grad(n, x, grad_f, fi);
	double* xcopy = new double[n];
	memcpy(xcopy, x, sizeof(double) * n);

	for (i = 0; i < n; i++)
	{
		for (j = 0; j <= i; j++)
		{
			xcopy[i] += h;
			xcopy[j] += h;
			// calc fij = f(xi+h, xj+h)
			fij[i * n + j] = eval(n, xcopy);
			fij[j * n + i] = fij[i * n + j];

			hess[i * n + j] = (fij[i * n + j] - fi[i] - fi[j] + fval) / (h * h);
			hess[j * n + i] = hess[i * n + j];

			xcopy[i] -= h;
			xcopy[j] -= h;
		}
	}

	delete[] xcopy;
	delete[] grad_f;
	delete[] fi;
	delete[] fij;
}

void init(int n, double* x)
{
	m_ker_ad.resize(m_row, m_col);

	m_tt[0] = 1.2;
	m_tt[1] = 2.4;
	m_tt[2] = 3.6;
	m_tt[3] = 4.8;
	m_tt[4] = 6;
	m_tt[5] = 7.2;
	m_tt[6] = 8.4;
	m_tt[7] = 9.6;

	m_bs[0] = 34.1609274827693;
	m_bs[1] = 31.7585604056163;
	m_bs[2] = 32.5300696224329;
	m_bs[3] = 32.0848501052102;
	m_bs[4] = 29.9925225741006;
	m_bs[5] = 29.2151278209712;
	m_bs[6] = 27.8226316386016;
	m_bs[7] = 30.3494569031564;

	x[0] = 0.025;
	x[1] = 0.99;
	x[2] = 0.025;
	x[3] = 495.99;
	x[4] = 0.99;
	x[5] = 0.99;


	m_ker[0 + m_row * 0] = 0.0324332408947955; m_ker[0 + m_row * 1] = 0.0900929912460324;
	m_ker[1 + m_row * 0] = 0.00105191511493984; m_ker[1 + m_row * 1] = 0.00811674707165767;
	m_ker[2 + m_row * 0] = 3.41170163237203e-05; m_ker[2 + m_row * 1] = 0.000731262022873115;
	m_ker[3 + m_row * 0] = 1.10652540903889e-06; m_ker[3 + m_row * 1] = 6.58815830252634e-05;
	m_ker[4 + m_row * 0] = 3.58882051475705e-08; m_ker[4 + m_row * 1] = 5.93546888276981e-06;
	m_ker[5 + m_row * 0] = 1.16397080283299e-09; m_ker[5 + m_row * 1] = 5.34744146096480e-07;
	m_ker[6 + m_row * 0] = 3.77513454427908e-11; m_ker[6 + m_row * 1] = 4.81766996731371e-08;
	m_ker[7 + m_row * 0] = 1.22439848084868e-12; m_ker[7 + m_row * 1] = 4.34038298191467e-09;
	for (int i = 0; i < m_col; i++)
	{
		for (int j = 0; j < m_row; j++)
		{
			m_ker_ad(j, i) = m_ker[i * m_row + j];
		}
	}
}

int main()
{
	int num = 6;
	double* x = new double[num];
	double* hess1 = new double[num * num];
	double* hess2 = new double[num * num];
	m_tt = new double[m_row];
	m_bs = new double[m_row];
	m_ker = new double[m_row * m_col];

	init(num, x);
	bool flag = hessian(num, x, hess1);
	hess_cpu(num, x, hess2);

	for (int i = 0; i < num; i++)
	{
		for (int j = 0; j <= i; j++)
		{
			printf("hess1[%d][%d] = %g, hess2[%d][%d] = %g\n", i, j, hess1[i * num + j], i, j, hess2[i * num + j]);
		}
	}

	delete[] x;
	delete[] hess1;
	delete[] hess2;
	delete[] m_tt;
	delete[] m_bs;
	delete[] m_ker;
	return 0;
}

The output of a run is shown below:

As can be seen, the two results agree within the computational precision.
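The finite-difference cross-check used here (hess_cpu) applies the forward-difference formula H_ij ≈ (f(x + h·e_i + h·e_j) − f(x + h·e_i) − f(x + h·e_j) + f(x)) / h². Stripped of the MKL-specific parts, the scheme looks like this (a generic sketch of the same method, not the project's code):

```cpp
#include <functional>
#include <vector>

// Forward-difference Hessian, usable as an independent cross-check of an
// AD result. For i == j the two increments land on the same coordinate,
// giving the standard second difference (f(x+2h) - 2f(x+h) + f(x)) / h^2.
std::vector<double> fd_hessian(const std::function<double(const std::vector<double>&)>& f,
                               std::vector<double> x, double h) {
    const int n = static_cast<int>(x.size());
    const double f0 = f(x);
    std::vector<double> fi(n), H(n * n);
    // fi[i] = f(x + h e_i)
    for (int i = 0; i < n; ++i) { x[i] += h; fi[i] = f(x); x[i] -= h; }
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j <= i; ++j) {
            x[i] += h; x[j] += h;
            const double fij = f(x);   // f(x + h e_i + h e_j)
            x[i] -= h; x[j] -= h;
            H[i * n + j] = H[j * n + i] = (fij - fi[i] - fi[j] + f0) / (h * h);
        }
    }
    return H;
}
```

On a quadratic such as f(x) = x0² + 3·x0·x1 the scheme is exact up to rounding, which makes it a convenient sanity test before applying it to the real objective.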

 

The complete project can be downloaded at the address below; note that Intel MKL must be installed for it to build and run.

https://download.csdn.net/download/u014559935/69494288
