Deep Learning in NLP (Part 1): Word Vectors and Language Models
http://licstar.net/archives/328
A roundup of Deep Learning papers
http://www.douban.com/note/382064119/
【1】 word2vec Project Home
First-hand material. Code: http://word2vec.googlecode.com/svn/trunk/. Papers:
[1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
[2] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS, 2013.
[3] Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. Linguistic Regularities in Continuous Space Word Representations. In Proceedings of NAACL HLT, 2013.
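Paper [3] above popularized the vector-offset analogy test (king - man + woman ≈ queen). A minimal sketch of that test, using hypothetical hand-made 3-d vectors rather than real trained embeddings:

```python
# Vector-offset analogy sketch, in the spirit of Mikolov et al. [3].
# The embeddings below are assumed toy values, for illustration only.
import numpy as np

vec = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "queen": np.array([0.1, 0.8, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),  # distractor word
}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Find the word closest to king - man + woman, excluding the query words.
target = vec["king"] - vec["man"] + vec["woman"]
best = max((w for w in vec if w not in ("king", "man", "woman")),
           key=lambda w: cosine(vec[w], target))
print(best)  # -> queen
```

With real word2vec embeddings the same nearest-neighbor search over the whole vocabulary recovers many syntactic and semantic regularities.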
【2】 Deep Learning in NLP (Part 1): Word Vectors and Language Models
licstar's classic post, which explains the main neural-network-based language models.
【3】 word2vec in Practice (Deep Learning實戰之word2vec)
A word2vec walkthrough written by several people at Youdao (有道). It proceeds from basic word vectors and statistical language models, through NNLM, Log-Linear/Log-Bilinear, and hierarchical Log-Bilinear models, to the CBOW and Skip-gram models, and finally the various tricks inside word2vec, with formula derivations side by side with code. It is essentially a compendium of the word2vec material available online; recommended for anyone interested in word2vec. (@王曉偉alex)
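The Skip-gram model mentioned above predicts each context word from the center word. A minimal NumPy sketch of Skip-gram training with a full softmax (an assumed toy setup; real word2vec uses hierarchical softmax or negative sampling for efficiency):

```python
# Toy Skip-gram with full-softmax SGD (illustrative sketch, not the word2vec C code).
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window, lr = len(vocab), 8, 2, 0.05

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # center-word ("input") vectors
W_out = rng.normal(scale=0.1, size=(V, D))  # context-word ("output") vectors

def pairs():
    # Yield (center, context) index pairs within the window.
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                yield idx[w], idx[corpus[j]]

for epoch in range(50):
    for c, o in pairs():
        v = W_in[c]                       # center embedding
        scores = W_out @ v                # (V,) logits over vocabulary
        p = np.exp(scores - scores.max())
        p /= p.sum()                      # softmax p(word | center)
        p[o] -= 1.0                       # dL/dscores for cross-entropy loss
        grad_v = W_out.T @ p              # gradient w.r.t. center vector
        W_out -= lr * np.outer(p, v)      # update output vectors
        W_in[c] -= lr * grad_v            # update center vector
```

CBOW inverts the direction: it averages the context vectors and predicts the center word, which is the other training mode covered in the document above.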
A plain-language dissection of word2vec
http://xiaoquanzi.net/?p=1561
A survey of word2vec for event mining
Applied to sessions
http://blog.csdn.net/shuishiman/article/details/20769437