I've been reading Computer Vision: Models, Learning, and Inference recently. Chapter 4 learns the probability parameters of a categorical distribution by maximum likelihood.
$$\operatorname{Pr}\left(x=k \mid \lambda_{1 \ldots K}\right)=\lambda_{k}$$
Here I use the C++ standard library's Poisson distribution to generate data, then estimate the distribution parameters by maximum likelihood. The Poisson mean parameter is set to 4. The code is as follows:
#include <random>
#include <vector>
using std::vector;

// Draw `number` samples from a Poisson distribution with mean 4.
vector<int> generate_categorical_distribution_data(int number)
{
    vector<int> data;
    std::random_device rd{};
    std::mt19937 gen{rd()};
    std::poisson_distribution<> d(4);
    for (int i = 0; i < number; i++)
    {
        data.push_back(d(gen));
    }
    return data;
}
$$\begin{aligned}
\hat{\lambda}_{1 \ldots K} &=\underset{\lambda_{1 \ldots K}}{\operatorname{argmax}}\left[\prod_{i=1}^{I} \operatorname{Pr}\left(x_{i} \mid \lambda_{1 \ldots K}\right)\right] && \text{s.t. } \sum_{k} \lambda_{k}=1 \\
&=\underset{\lambda_{1 \ldots K}}{\operatorname{argmax}}\left[\prod_{i=1}^{I} \operatorname{Cat}_{x_{i}}\left[\lambda_{1 \ldots K}\right]\right] && \text{s.t. } \sum_{k} \lambda_{k}=1 \\
&=\underset{\lambda_{1 \ldots K}}{\operatorname{argmax}}\left[\prod_{k=1}^{K} \lambda_{k}^{N_{k}}\right] && \text{s.t. } \sum_{k} \lambda_{k}=1
\end{aligned}$$

where $N_k$ is the number of observations taking the value $k$.
The Poisson distribution generates non-negative integers starting from 0, so each distinct value that appears in the dataset gets its own probability estimate. The derivation uses the standard maximum-likelihood technique of taking the log and differentiating, with a Lagrange multiplier $\nu$ enforcing the sum-to-one constraint:
$$L=\sum_{k=1}^{K} N_{k} \log \left[\lambda_{k}\right]+\nu\left(\sum_{k=1}^{K} \lambda_{k}-1\right)$$
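Filling in the intermediate step: differentiating $L$ with respect to each $\lambda_k$, setting the derivative to zero, and then eliminating $\nu$ via the constraint gives

```latex
\frac{\partial L}{\partial \lambda_k} = \frac{N_k}{\lambda_k} + \nu = 0
\quad\Rightarrow\quad
\lambda_k = -\frac{N_k}{\nu},
\qquad
\sum_{k=1}^{K} \lambda_k = 1
\quad\Rightarrow\quad
\nu = -\sum_{m=1}^{K} N_m .
```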
The result is:

$$\hat{\lambda}_{k}=\frac{N_{k}}{\sum_{m=1}^{K} N_{m}}$$
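As a quick sanity check on this formula, here is a minimal sketch that computes $\hat{\lambda}_k$ from a vector of per-value counts (the counts in the usage example are made up for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// ML estimate of categorical parameters from per-value counts N_k:
// lambda_k = N_k / sum_m N_m.
std::vector<double> categorical_ml(const std::vector<int>& counts)
{
    int total = 0;
    for (int n : counts) total += n;
    std::vector<double> lambda;
    for (int n : counts) lambda.push_back(static_cast<double>(n) / total);
    return lambda;
}
```

For example, counts {2, 3, 5} yield the estimates {0.2, 0.3, 0.5}.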
The algorithm is as follows:
Input: multi-valued training data $\{x_i\}_{i=1}^{I}$
Output: ML estimate of categorical parameters $\theta = \{\lambda_1 \ldots \lambda_K\}$
begin
    for k = 1 to K do
        $\lambda_k = \sum_{i=1}^{I} \delta[x_i - k] / I$
end
The code for this learning step is as follows:
#include <iostream>
#include <map>
#include <vector>

void max_likelihood_categorical_distribution_parameters()
{
    std::vector<int> data = generate_categorical_distribution_data(1000);

    // Count the occurrences N_k of each value k.
    std::map<int, double> hist{};
    for (std::size_t i = 0; i < data.size(); i++)
    {
        ++hist[data[i]];
    }

    // Normalize counts into probabilities lambda_k = N_k / I. Iterating
    // over the map entries (rather than indexing hist.at(i)) is safe even
    // when some value in [0, max] never occurs in the sample.
    double total_p = 0;
    for (auto& entry : hist)
    {
        entry.second /= data.size();
        total_p += entry.second;
        std::cout << entry.first << ": " << entry.second << std::endl;
    }
    std::cout << "total_p: " << total_p << std::endl;
}
The map hist now holds the maximum-likelihood estimates of the distribution, and the total_p printed at the end should come out to 1.
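To tie the two steps together, here is a self-contained sketch of the whole pipeline as a pair of functions (the names `generate_data` and `estimate`, and the final sum-to-one check, are my additions; they follow the same logic as the code above):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <random>
#include <vector>

// Draw `number` samples from a Poisson distribution with mean 4.
std::vector<int> generate_data(int number)
{
    std::vector<int> data;
    std::random_device rd{};
    std::mt19937 gen{rd()};
    std::poisson_distribution<> d(4);
    for (int i = 0; i < number; i++) data.push_back(d(gen));
    return data;
}

// ML categorical estimates: lambda_k = N_k / I for each observed value k.
std::map<int, double> estimate(const std::vector<int>& data)
{
    std::map<int, double> lambda;
    for (int x : data) ++lambda[x];
    for (auto& e : lambda) e.second /= data.size();
    return lambda;
}
```

Whatever the sample looks like, the estimated probabilities must sum to 1, which makes a convenient end-to-end check.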