
Deep Belief Networks: A Resource Roundup
Blog category: Neural Networks · Tags: Deep Learning, Neural Networks, Deep Belief Networks

程序员文章站 2024-03-14 16:00:40

My undergraduate thesis is on DBNs. Having worked through a fair amount of the literature, I've put together this roundup.

All of these papers can be found and downloaded via Google Scholar.


Arel, I., Rose, D. C. and Karnowski, T. P. Deep machine learning - a new frontier in artificial intelligence research. Computational Intelligence Magazine, IEEE, vol. 5, pp. 13-18, 2010.

An introductory article on deep learning; good starting material.


Bengio, Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, vol. 2, pp. 1-127, 2009.

A classic, comprehensive survey of deep learning; it works well as a study text for the field.


Hinton, G. E. Learning multiple layers of representation. Trends in Cognitive Sciences, vol. 11, pp. 428-434, 2007.

Conveys the key algorithms behind DBNs without requiring much mathematics. Plainly written and short, this paper is well suited to beginners trying to understand DBNs.


Hinton, G. E. To recognize shapes, first learn to generate images. Technical Report UTML TR 2006-003, University of Toronto, 2006.

Internal lecture notes from the University of Toronto. Recommended reading.


Hinton, G. E., Osindero, S. and Teh, Y. W. A fast learning algorithm for deep belief nets. Neural Computation, vol. 18, pp. 1527-1554, 2006.

The seminal DBN paper, and a hugely influential one; it rewards several careful readings. The authors lay out DBNs in detail, establish their equivalence to a stack of RBMs, and from that equivalence derive the DBN learning algorithm.
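The stacked-RBM view leads directly to greedy layer-wise pretraining: train one RBM on the data, then train the next RBM on the first one's hidden activations, and so on. As a rough illustration only (a minimal NumPy sketch, not the paper's implementation; every function name here is made up), the idea looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1(v0, W, b, c, lr=0.1):
    """One CD-1 update for a binary RBM.
    v0: (batch, n_visible); W: (n_visible, n_hidden); b, c: biases."""
    ph0 = sigmoid(v0 @ W + c)                       # P(h=1 | v0)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sampled hidden states
    pv1 = sigmoid(h0 @ W.T + b)                     # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + c)
    n = v0.shape[0]
    # <v h>_data - <v h>_recon, averaged over the batch
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=10, lr=0.1):
    """Greedy layer-wise pretraining: each RBM is trained on the
    hidden activations of the layer below, one layer at a time."""
    layers, x = [], data
    n_in = data.shape[1]
    for n_hid in layer_sizes:
        W = rng.normal(0.0, 0.01, size=(n_in, n_hid))
        b, c = np.zeros(n_in), np.zeros(n_hid)
        for _ in range(epochs):
            cd1(x, W, b, c, lr)
        layers.append((W, b, c))
        x = sigmoid(x @ W + c)  # feed hidden probabilities upward
        n_in = n_hid
    return layers
```

The key point the sketch makes concrete: each layer is trained in isolation as an ordinary RBM, and only the upward pass of activations couples the layers together.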


Hinton, G. E. and Salakhutdinov, R. R. Reducing the dimensionality of data with neural networks. Science, vol. 313, no. 5786, pp. 504–507, 2006.

A landmark paper published in Science: it marks the point at which deep learning finally had an efficient, workable training algorithm.


Hinton, G. E. A practical guide to training restricted Boltzmann machines. Technical Report UTML TR 2010-003, University of Toronto, 2010.

A collection of best practices for training RBMs.
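A few of the guide's recommendations are easy to show in code: small random weight initialization, mini-batch updates with momentum, using hidden probabilities rather than binary samples when accumulating the learning statistics, and monitoring reconstruction error as a cheap (if imperfect) progress signal. The sketch below illustrates these under those assumptions; it is not the guide's reference code, and the hyperparameter values are merely plausible defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, batch_size=10, lr=0.05, momentum=0.5):
    """CD-1 with mini-batches, momentum, and probability-based statistics."""
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # small random init
    b, c = np.zeros(n_visible), np.zeros(n_hidden)
    dW = np.zeros_like(W)
    for _ in range(epochs):
        err = 0.0
        for i in range(0, len(data), batch_size):
            v0 = data[i:i + batch_size]
            ph0 = sigmoid(v0 @ W + c)
            # Sample binary hidden states for the Gibbs step, but use the
            # probabilities ph0/ph1 when accumulating the gradient statistics.
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + b)
            ph1 = sigmoid(pv1 @ W + c)
            grad = (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
            dW = momentum * dW + lr * grad  # momentum smooths the updates
            W += dW
            b += lr * (v0 - pv1).mean(axis=0)
            c += lr * (ph0 - ph1).mean(axis=0)
            err += np.sum((v0 - pv1) ** 2)  # reconstruction error, per epoch
    return W, b, c
```

Reconstruction error tends to fall even when the true objective (log-likelihood) is not improving, which is exactly why the guide treats it as a rough monitor rather than a stopping criterion.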


Erhan, D., Manzagol, P. A., Bengio, Y., Bengio, S. and Vincent, P. The difficulty of training deep architectures and the effect of unsupervised pretraining. In The Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 153–160, 2009.


Erhan, D., Courville, A., Bengio, Y. and Vincent, P. Why Does Unsupervised Pre-training Help Deep Learning? In the 13th International Conference on Artificial Intelligence and Statistics (AISTATS), Chia Laguna Resort, Sardinia, Italy, 2010.

These two papers examine the role of unsupervised pre-training; they are best read together.


The blog post below offers an even more comprehensive collection of materials. Its author is from Fudan University and now appears to work at Yahoo Labs Beijing:

http://demonstrate.ycool.com/post.3006074.html