Restricted Boltzmann machine

A restricted Boltzmann machine (RBM) is a generative stochastic artificial neural network that can learn a probability distribution over its set of inputs. RBMs were initially invented under the name Harmonium by Paul Smolensky in 1986, and rose to prominence after Geoffrey Hinton and collaborators invented fast learning algorithms for them in the mid-2000s. RBMs have found applications in dimensionality reduction, classification, collaborative filtering, feature learning, topic modelling and even many-body quantum mechanics. They can be trained in either supervised or unsupervised ways, depending on the task. As their name implies, RBMs are a variant of Boltzmann machines, with the restriction that their neurons must form a bipartite graph: a pair of nodes from each of the two groups of units (commonly referred to as the "visible" and "hidden" units respectively) may have a symmetric connection between them, and there are no connections between nodes within a group. By contrast, "unrestricted" Boltzmann machines may have connections between hidden units. This restriction allows for more efficient training algorithms than are available for the general class of Boltzmann machines, in particular the gradient-based contrastive divergence algorithm. Restricted Boltzmann machines can also be used in deep learning networks. In particular, deep belief networks can be formed by "stacking" RBMs and optionally fine-tuning the resulting deep network with gradient descent and backpropagation. (Wikipedia).
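The bipartite restriction described above is what makes contrastive divergence cheap: given the visible units, the hidden units are conditionally independent (and vice versa), so each Gibbs half-step is a single matrix operation. The following is a minimal sketch of a Bernoulli-Bernoulli RBM trained with CD-1 in NumPy; the class name, hyperparameters, and toy data are illustrative assumptions, not taken from the lectures below.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli-Bernoulli RBM: symmetric weights W between visible and
    hidden layers, no connections within a layer (bipartite graph)."""

    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        # p(h_j = 1 | v) factorises because the graph is bipartite
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        # p(v_i = 1 | h), same factorisation in the other direction
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update: positive statistics from the data,
        negative statistics from a single Gibbs sweep."""
        ph0 = self.hidden_probs(v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        pv1 = self.visible_probs(h0)
        v1 = (rng.random(pv1.shape) < pv1).astype(float)
        ph1 = self.hidden_probs(v1)
        # Approximate gradient: <v h>_data - <v h>_reconstruction
        self.W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(v0)
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (ph0 - ph1).mean(axis=0)

# Toy data: two repeated binary patterns (illustrative only)
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 50, dtype=float)

rbm = RBM(n_visible=6, n_hidden=2)
for _ in range(500):
    rbm.cd1_step(data)

# Reconstructions of the two patterns after training
recon = rbm.visible_probs(rbm.hidden_probs(data[:2]))
```

With only two hidden units the model has just enough capacity to dedicate roughly one hidden unit per pattern; practical training (minibatches, momentum, CD-k, weight decay) is covered in the Hinton lectures listed below.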

Lecture 12C : Restricted Boltzmann Machines

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] Lecture 12C : Restricted Boltzmann Machines

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Restricted Boltzmann Machine | Neural Network Tutorial | Deep Learning Tutorial | Edureka

** AI & Deep Learning with Tensorflow Training: https://www.edureka.co/ai-deep-learning-with-tensorflow ** This Edureka video on "Restricted Boltzmann Machine" provides a detailed and comprehensive introduction to Restricted Boltzmann Machines, also known as RBMs.

From playlist Deep Learning With TensorFlow Videos

Lecture 12/16 : Restricted Boltzmann machines (RBMs)

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] 12A The Boltzmann Machine learning algorithm 12B More efficient ways to get the statistics 12C Restricted Boltzmann Machines 12D An example of Contrastive Divergence Learning 12E RBMs for collaborative filtering

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Lecture 12.3 — Restricted Boltzmann Machines [Neural Networks for Machine Learning]

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. Link to the course (login required): https://class.coursera.org/neuralnets-2012-001

From playlist [Coursera] Neural Networks for Machine Learning — Geoffrey Hinton

Deep Learning Lecture 10.3 - Restricted Boltzmann Machines

Restricted Boltzmann Machines: - Architecture - Energy - Gibbs Sampling and Contrastive Divergence

From playlist Deep Learning Lecture

Lecture 14A : Learning layers of features by stacking RBMs

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] Lecture 14A : Learning layers of features by stacking RBMs

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Lecture 14.1 — Learning layers of features by stacking RBMs [Neural Networks for Machine Learning]

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. Link to the course (login required): https://class.coursera.org/neuralnets-2012-001

From playlist [Coursera] Neural Networks for Machine Learning — Geoffrey Hinton

Lecture 14/16 : Deep neural nets with generative pre-training

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] 14A Learning layers of features by stacking RBMs 14B Discriminative fine-tuning for DBNs 14C What happens during discriminative fine-tuning? 14D Modeling real-valued data with an RBM 14E RBMs are Infinite Sigmoid Belief Nets

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Lecture 16A : Learning a joint model of images and captions

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] Lecture 16A : Learning a joint model of images and captions

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Lecture 16.1 — Learning a joint model of images and captions [Neural Networks for Machine Learning]

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. Link to the course (login required): https://class.coursera.org/neuralnets-2012-001

From playlist [Coursera] Neural Networks for Machine Learning — Geoffrey Hinton

Lecture 12A : The Boltzmann Machine learning algorithm

Neural Networks for Machine Learning by Geoffrey Hinton [Coursera 2013] Lecture 12A : The Boltzmann Machine learning algorithm

From playlist Neural Networks for Machine Learning by Professor Geoffrey Hinton [Complete]

Lecture 14.5 — RBMs are infinite sigmoid belief nets [Neural Networks for Machine Learning]

Lecture from the course Neural Networks for Machine Learning, as taught by Geoffrey Hinton (University of Toronto) on Coursera in 2012. Link to the course (login required): https://class.coursera.org/neuralnets-2012-001

From playlist [Coursera] Neural Networks for Machine Learning — Geoffrey Hinton

Related pages

Dimensionality reduction | Boltzmann machine | Deep learning | Logistic function | Outer product | Generative model | Hopfield network | Graphical model | Factor analysis | Marginal distribution | Statistical classification | Markov random field | Bipartite graph | Joint probability distribution | Conditional probability | Multinomial distribution | Probability distribution | Gradient descent | Helmholtz machine | Log probability | Artificial neural network | Topic model | Gibbs sampling | Autoencoder | Conditional independence | Expected value | Normalizing constant | Deeplearning4j | Backpropagation | Boolean algebra | Partition function (mathematics) | Softmax function | Matrix (mathematics) | Bernoulli distribution | Deep belief network