Inference in Deep Learning
Many new generative methods have been developed in recent years; one of them, the variational autoencoder, is sketched right after this list:
- denoising autoencoders
- generative stochastic networks
- variational autoencoders
- importance weighted autoencoders
- generative adversarial networks
- infusion training
- variational walkback
- stacked generative adversarial networks
- generative latent optimization
- deep unsupervised learning using non-equilibrium thermodynamics
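To make at least one of these concrete, here is a minimal numpy sketch of the variational autoencoder objective: a single-sample Monte Carlo estimate of the evidence lower bound (ELBO) with a diagonal Gaussian encoder and a Bernoulli decoder. The function names, dimensions, and the toy linear decoder are illustrative assumptions, not code from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_estimate(x, mu, log_var, decode):
    """Single-sample Monte Carlo estimate of the VAE evidence lower bound.

    x           : observed binary data vector
    mu, log_var : mean and log-variance of the diagonal Gaussian encoder q(z|x)
    decode      : function mapping a latent z to Bernoulli means over x
    """
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * log_var) * eps

    # Reconstruction term: log p(x | z) under a Bernoulli decoder
    p = np.clip(decode(z), 1e-7, 1 - 1e-7)
    log_px_given_z = np.sum(x * np.log(p) + (1 - x) * np.log(1 - p))

    # Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian encoder
    kl = 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

    return log_px_given_z - kl

# Toy usage: a fixed random linear-sigmoid decoder, just to run the estimator.
D, K = 10, 2                                   # data / latent dimensionality (arbitrary)
W_dec = rng.standard_normal((D, K))

def decode(z):
    """Toy linear-sigmoid decoder (a stand-in for a neural network)."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))

x = rng.integers(0, 2, size=D).astype(float)   # a fake binary observation
print(elbo_estimate(x, mu=np.zeros(K), log_var=np.zeros(K), decode=decode))
```

Importance weighted autoencoders tighten this bound by averaging several importance-weighted samples inside the logarithm instead of using a single one.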
Deep Models
We can't delve into the details of these older workhorse models here, but let us summarize a few of them nevertheless.
A Boltzmann machine can be seen as a stochastic generalization of a Hopfield network. In its unrestricted form, a Hebbian-style learning rule is often used to learn representations.
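To make that relationship concrete, here is a minimal numpy sketch contrasting the deterministic Hopfield update with its stochastic Boltzmann counterpart, together with the Hebbian outer-product rule for the weights. The +/-1 unit convention, the variable names, and the tiny recall example are illustrative assumptions, not a full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Hebbian (outer-product) rule: units that fire together wire together."""
    W = sum(np.outer(p, p) for p in patterns) / len(patterns)
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W

def hopfield_update(W, s, i):
    """Deterministic Hopfield update of unit i: threshold the net input."""
    return 1 if W[i] @ s > 0 else -1

def boltzmann_update(W, s, i, temperature=1.0):
    """Stochastic Boltzmann update of unit i: the same net input, but the unit
    switches on with a sigmoid probability rather than a hard threshold."""
    p_on = 1.0 / (1.0 + np.exp(-2.0 * (W[i] @ s) / temperature))
    return 1 if rng.random() < p_on else -1

# Tiny recall example: store two +/-1 patterns, then clean up a one-bit corruption.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = hebbian_weights(patterns)
s = np.array([1, 1, 1, 1, -1, -1, -1, 1])   # first pattern with its last bit flipped
for i in range(len(s)):
    s[i] = hopfield_update(W, s, i)         # swap in boltzmann_update to sample instead
print(s)                                     # recovers the first stored pattern
```

The only difference between the two update rules is that the Boltzmann unit turns on with a sigmoid probability of its net input instead of thresholding it, which is exactly the stochastic generalization mentioned above.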
Don't just read the excerpt. :-) Sit down and read for real!