Using Automatic Differentiation

In an earlier optimization algorithm I needed the gradient and the Hessian matrix; for performance, I derived the analytic expressions by hand and hard-coded them. I later learned that automatic differentiation (AD) can automate this process, so I looked into it.

 

For an introduction to the theory of automatic differentiation, see:

https://blog.csdn.net/qq_38640439/article/details/81674466

https://www.cnblogs.com/wxshi/p/8007106.html

In essence, forward-mode AD applies arithmetic rules similar to those of complex numbers to the elementary functions, so that every computation yields the 0th- and 1st-order results (function value and first derivative) simultaneously; a single traversal of the expression tree then produces both the value and the derivative. Reverse-mode AD instead records the value and local derivative at every elementary operation node, then applies the chain rule in a backward sweep over the expression tree to recover the value and the derivatives. The core of either mode is overloading the operators of the "active" variables.

 

There is a dedicated website that collects AD papers and toolkits:

http://www.autodiff.org/

Since my existing code is C++, I skimmed the list and narrowed it down to the following libraries.

adept

http://www.met.rdg.ac.uk/clouds/adept/

adol-c

https://github.com/coin-or/ADOL-C

autodiff

https://autodiff.github.io/

CoDiPack

https://www.scicomp.uni-kl.de/software/codi/

cppad

https://coin-or.github.io/CppAD/doc/cppad.htm

FastAD

https://github.com/JamesYang007/FastAD

 

I filtered on two criteria: first, maintenance activity — libraries that had not been updated in a long time were dropped; second, dependencies — anything with a heavy dependency chain that complicates the build was also dropped, as were projects whose homepages were unreachable. The libraries above seemed to fit the bill, so I examined them one by one.

 

After some study, it turns out the AD functionality itself is solid across all of them; the differences lie mainly in how they integrate with existing code, and in how thorough the documentation and examples are. adept, CoDiPack, and autodiff are header-only C++ libraries: no build step needed, just include them in the project. The other three — adol-c, cppad, and FastAD — require compilation, which is not difficult either.

The next problem is trickier. My program uses BLAS for matrix multiplication, while the basic approach of AD is to replace the independent variables in the original function with a library-defined type, the so-called active variables — and these cannot be passed directly to BLAS routines. So whether a library implements matrix operations on active variables became the deciding factor. In principle one could write that layer oneself, but it is a fair amount of work.

I tested them one by one:

1. adept implements matrix operations, but only first derivatives; a second-order Hessian can be assembled from first-order Jacobians, but that does not work for nonlinear functions;

2. CoDiPack supports higher-order derivatives, but does not implement matrix operations;

3. autodiff supports matrix operations via Eigen, and also higher-order derivatives;

4. adol-c has fairly complete documentation and plenty of examples, supports higher-order derivatives, and even CUDA and OpenMP; but its dependencies on boost and ColPack are heavy, and I found no matrix-operation interface — only a mechanism for differentiating external functions — so I did not dig deeper;

5. cppad implements matrix operations in its examples and supports higher-order derivatives, but the documentation is thin; there are plenty of examples, yet reading their source one by one is inefficient;

6. FastAD, like autodiff, uses Eigen for matrix operations, but I am not sure whether it supports higher-order derivatives, and its documentation is also sparse, so I did not pursue it.

Based on the above, I settled on autodiff as the library to use. One remaining caveat: autodiff requires C++17, and I was not sure whether my VS2017 toolchain supported it. After some digging I found that the C++ language standard can be set under Project Properties → C/C++ → Language (equivalent to the /std:c++17 compiler flag); with that setting the project compiled successfully, and all obstacles were cleared.

I wrote a test sample as follows:

test.cpp

#include "mkl.h"
#include <cstdio>   // printf
#include <cstring>  // memset, memcpy
// autodiff include
#include <autodiff/reverse/var.hpp>
#include <autodiff/reverse/var/eigen.hpp>

Eigen::MatrixXd m_ker_ad;
double* m_ker;
double* m_tt, *m_bs;
const int m_row = 8, m_col = 2;
const double m_step = 1e-3;

double m_fval;
Eigen::VectorXd m_gradient;
Eigen::MatrixXd m_hessian;

autodiff::var calc_y(const autodiff::ArrayXvar& x)
{
	int i;
	autodiff::var Aexp = x[0];
	autodiff::var Texp = x[1];
	autodiff::var Acos = x[2];
	autodiff::var Tcos = x[3];

	autodiff::VectorXvar fkxBs(m_row);  // fv*k*xB - bs
	autodiff::VectorXvar xB = x(Eigen::seq(4,Eigen::last));

	// calc fkxBs = k*xB
	fkxBs = m_ker_ad * xB;

	for (i = 0; i < m_row; i++)
	{
		autodiff::var fv = 1 - Aexp * (1 - exp(-m_tt[i] / Texp)) - Acos * (1 - cos(2 * 3.14 * m_tt[i] / Tcos));
		fkxBs(i) = fkxBs(i) * fv - m_bs[i];
	}

	autodiff::var tmp1 = fkxBs.dot(fkxBs);

	return tmp1;
}

bool hessian(int n, const double* x, double* hess)
{
	autodiff::VectorXvar xx(n);
	for (int i = 0; i < n; i++)
	{
		xx(i) = x[i];
	}

	autodiff::var u = calc_y(xx);  // the output variable u

	m_hessian = autodiff::hessian(u, xx, m_gradient);  // evaluate the Hessian matrix H and the gradient vector g of u

	for (int i = 0; i < n; i++)
	{
		for (int j = 0; j <= i; j++)
		{
			hess[i*n + j] = m_hessian(i, j);
			hess[j * n + i] = hess[i * n + j];
		}
	}

	return true;
}

double eval(int n, const double* x)
{
	int i;
	double Aexp = x[0];
	double Texp = x[1];
	double Acos = x[2];
	double Tcos = x[3];
	double* fv = new double[m_row];
	double* fkxBs = new double[m_row];  // fv*k*xB - bs
	double* xB = new double[m_col];

	memset(fkxBs, 0, sizeof(double) * m_row);
	memcpy(xB, &x[4], sizeof(double) * m_col);

	// calc fkxBs = k*xB
	cblas_dgemv(CblasColMajor, CblasNoTrans, m_row, m_col,
		1, m_ker, m_row, xB, 1, 0, fkxBs, 1);

	for (i = 0; i < m_row; i++)
	{
		fv[i] = 1 - Aexp * (1 - exp(-m_tt[i] / Texp)) - Acos * (1 - cos(2 * 3.14 * m_tt[i] / Tcos));
		fkxBs[i] = fkxBs[i] * fv[i] - m_bs[i];
	}

	double tmp1 = cblas_ddot(m_row, fkxBs, 1, fkxBs, 1);

	delete[] fv;
	delete[] fkxBs;
	delete[] xB;

	return tmp1;
}

bool eval_grad(int n, const double* x, double* grad_f, double* fi)
{
	double* xcopy = new double[n];
	double fval, h = m_step;
	int i;

	fval = eval(n, x);
	for (i = 0; i < n; i++)
	{
		memcpy(xcopy, x, sizeof(double) * n);
		xcopy[i] = xcopy[i] + h;
		fi[i] = eval(n, xcopy);
		grad_f[i] = (fi[i] - fval) / h;
		//printf("Norm: grad[%d] = %g\n", i, grad_f[i]);
	}

	delete[] xcopy;
	return true;
}

void hess_cpu(int n, const double* x, double* hess)
{
	int i, j;
	double h = m_step, fval;
	double* grad_f = new double[n];
	double* fi = new double[n];
	double* fij = new double[n * n];

	// before process, need to calc the following
	// calc f
	// calc fi = f(x[i] + h)
	// calc fij = f(xi+h, xj+h)
	fval = eval(n, x);
	eval_grad(n, x, grad_f, fi);
	double* xcopy = new double[n];
	memcpy(xcopy, x, sizeof(double) * n);

	for (i = 0; i < n; i++)
	{
		for (j = 0; j <= i; j++)
		{
			xcopy[i] += h;
			xcopy[j] += h;
			// calc fij = f(xi+h, xj+h)
			fij[i * n + j] = eval(n, xcopy);
			fij[j * n + i] = fij[i * n + j];

			hess[i * n + j] = (fij[i * n + j] - fi[i] - fi[j] + fval) / (h * h);
			hess[j * n + i] = hess[i * n + j];

			xcopy[i] -= h;
			xcopy[j] -= h;
		}
	}

	delete[] xcopy;
	delete[] grad_f;
	delete[] fi;
	delete[] fij;
}

void init(int n, double* x)
{
	m_ker_ad.resize(m_row, m_col);

	m_tt[0] = 1.2;
	m_tt[1] = 2.4;
	m_tt[2] = 3.6;
	m_tt[3] = 4.8;
	m_tt[4] = 6;
	m_tt[5] = 7.2;
	m_tt[6] = 8.4;
	m_tt[7] = 9.6;

	m_bs[0] = 34.1609274827693;
	m_bs[1] = 31.7585604056163;
	m_bs[2] = 32.5300696224329;
	m_bs[3] = 32.0848501052102;
	m_bs[4] = 29.9925225741006;
	m_bs[5] = 29.2151278209712;
	m_bs[6] = 27.8226316386016;
	m_bs[7] = 30.3494569031564;

	x[0] = 0.025;
	x[1] = 0.99;
	x[2] = 0.025;
	x[3] = 495.99;
	x[4] = 0.99;
	x[5] = 0.99;


	m_ker[0 + m_row * 0] = 0.0324332408947955; m_ker[0 + m_row * 1] = 0.0900929912460324;
	m_ker[1 + m_row * 0] = 0.00105191511493984; m_ker[1 + m_row * 1] = 0.00811674707165767;
	m_ker[2 + m_row * 0] = 3.41170163237203e-05; m_ker[2 + m_row * 1] = 0.000731262022873115;
	m_ker[3 + m_row * 0] = 1.10652540903889e-06; m_ker[3 + m_row * 1] = 6.58815830252634e-05;
	m_ker[4 + m_row * 0] = 3.58882051475705e-08; m_ker[4 + m_row * 1] = 5.93546888276981e-06;
	m_ker[5 + m_row * 0] = 1.16397080283299e-09; m_ker[5 + m_row * 1] = 5.34744146096480e-07;
	m_ker[6 + m_row * 0] = 3.77513454427908e-11; m_ker[6 + m_row * 1] = 4.81766996731371e-08;
	m_ker[7 + m_row * 0] = 1.22439848084868e-12; m_ker[7 + m_row * 1] = 4.34038298191467e-09;
	for (int i = 0; i < m_col; i++)
	{
		for (int j = 0; j < m_row; j++)
		{
			m_ker_ad(j, i) = m_ker[i * m_row + j];
		}
	}
}

int main()
{
	int num = 6;
	double* x = new double[num];
	double* hess1 = new double[num * num];
	double* hess2 = new double[num * num];
	m_tt = new double[m_row];
	m_bs = new double[m_row];
	m_ker = new double[m_row * m_col];

	init(num, x);
	bool flag = hessian(num, x, hess1);
	hess_cpu(num, x, hess2);

	for (int i = 0; i < num; i++)
	{
		for (int j = 0; j <= i; j++)
		{
			printf("hess1[%d][%d] = %g, hess2[%d][%d] = %g\n", i, j, hess1[i * num + j], i, j, hess2[i * num + j]);
		}
	}

	delete[] x;
	delete[] hess1;
	delete[] hess2;
	delete[] m_tt;
	delete[] m_bs;
	delete[] m_ker;
	return 0;
}

The output is as follows:

As the output shows, the two results agree within computational precision.

 

The complete project can be downloaded from the address below; note that Intel MKL must be installed to build and run it.

https://download.csdn.net/download/u014559935/69494288
