Deep Learning -- Boltzmann Machines

bm可以看做是hopfield的一個特例。rbm又是bm的一個特例。下面的代碼,看了很久才恍然大悟,好還前面看過bm的理論文章。

def sample_h_given_v(self, v0_sample):
    ''' This function infers state of hidden units given visible units '''
    # compute the activation of the hidden units given a sample of the visibles
    pre_sigmoid_h1, h1_mean = self.propup(v0_sample)
    # get a sample of the hiddens given their activation
    # Note that theano_rng.binomial returns a symbolic sample of dtype
    # int64 by default. If we want to keep our computations in floatX
    # for the GPU we need to specify to return the dtype floatX
    h1_sample = self.theano_rng.binomial(size=h1_mean.shape, n=1, p=h1_mean,
                                         dtype=theano.config.floatX)
    return [pre_sigmoid_h1, h1_mean, h1_sample]

 

Here h1_sample is drawn from a binomial distribution with n = 1 (i.e. a Bernoulli), each unit sampled according to its probability p; this stochastic sampling is in the same spirit as simulated annealing.

You can get a feel for this yourself: use a sigmoid to map an activation to an approximate probability in [0, 1], then run a simple experiment with binomial, as sketched below.
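A minimal sketch of that experiment, using plain NumPy instead of Theano's symbolic theano_rng (the activation values here are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)

# hypothetical pre-sigmoid activations, stand-ins for W x + b
pre_sigmoid = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])

# squash into (0, 1); these play the role of h1_mean
p = 1.0 / (1.0 + np.exp(-pre_sigmoid))
print(p)  # [0.1192 0.3775 0.5 0.6225 0.8808] (rounded)

# one Bernoulli draw per unit, like theano_rng.binomial(n=1, p=h1_mean)
sample = rng.binomial(n=1, p=p)
print(sample)  # a random 0/1 vector; units with larger p come up 1 more often

Rerunning the last two lines shows the point: the sample changes from run to run, but its average over many runs converges to p.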

def propup(self, vis):
    ''' This function propagates the visible units' activation upwards to
    the hidden units.

    Note that we also return the pre-sigmoid activation of the layer. As
    it will turn out later, due to how Theano deals with optimizations,
    this symbolic variable will be needed to write down a more stable
    computational graph (see details in the reconstruction cost function).
    '''
    pre_sigmoid_activation = T.dot(vis, self.W) + self.hbias
    return [pre_sigmoid_activation, T.nnet.sigmoid(pre_sigmoid_activation)]

 

This function, together with the one above, implements (one half of) Gibbs sampling. In the papers, though, the components of a sample are updated one at a time, not all at once; here a single Wx + b computes every hidden unit together. This block update is valid for the RBM because, given the visible units, the hidden units are conditionally independent. I still need to go back over the papers.
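To make the block update concrete, here is a minimal NumPy sketch of one full Gibbs step v -> h -> v', with made-up parameter shapes and values (the Theano tutorial does the downward pass symbolically, mirroring propup with the transposed weights):

import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# hypothetical RBM: 6 visible units, 3 hidden units
W = rng.normal(scale=0.1, size=(6, 3))
hbias = np.zeros(3)
vbias = np.zeros(6)

v0 = rng.binomial(n=1, p=0.5, size=6).astype(float)  # a random visible vector

# up pass: sample the whole hidden layer in one block (propup + sample_h_given_v)
h_mean = sigmoid(v0 @ W + hbias)
h_sample = rng.binomial(n=1, p=h_mean).astype(float)

# down pass: reconstruct all visibles in one block from the sampled hiddens
v_mean = sigmoid(h_sample @ W.T + vbias)
v_sample = rng.binomial(n=1, p=v_mean).astype(float)

print(v0, "->", v_sample)

Chaining such steps gives the Gibbs chain that contrastive divergence training truncates after a few steps.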

 
