Inference in Deep Learning

There are many, many new generative methods that have been developed in recent years.

  • denoising autoencoders
  • generative stochastic networks
  • variational autoencoders
  • importance weighted autoencoders
  • generative adversarial networks
  • infusion training
  • variational walkback
  • stacked generative adversarial networks
  • generative latent optimization
  • deep learning through the use of non-equilibrium thermodynamics

Deep Models

We can’t delve into the details of those old workhorse models, but let us summarize a few of them nevertheless.

A Boltzmann machine can be seen as a stochastic generalization of a Hopfield network. In its unrestricted form it is often trained with Hebbian learning to learn representations.
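As a minimal sketch of the Hebbian idea (assuming bipolar ±1 units and the standard outer-product rule for a Hopfield-style network; the pattern values are made up for illustration):

```python
import numpy as np

def hebbian_weights(patterns):
    """Learn Hopfield weights with the outer-product (Hebbian) rule.

    patterns: array of shape (n_patterns, n_units) with entries in {-1, +1}.
    """
    n_patterns, _ = patterns.shape
    W = patterns.T @ patterns / n_patterns  # units that fire together wire together
    np.fill_diagonal(W, 0.0)                # no self-connections
    return W

def recall(W, state, n_steps=10):
    """Synchronously update units to (hopefully) settle into a stored pattern."""
    for _ in range(n_steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties deterministically
    return state

# Store one pattern and recover it from a corrupted copy.
p = np.array([[1, -1, 1, -1, 1, -1]])
W = hebbian_weights(p)
noisy = np.array([1, -1, 1, -1, -1, -1])  # one flipped bit
print(recall(W, noisy))
```

The Boltzmann machine replaces these deterministic sign updates with stochastic ones, sampling each unit from a sigmoid of its input; the Hebbian flavor survives in its learning rule as the difference between data-driven and model-driven correlations.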

Don't just read the excerpt. :-) Sit down and read for real! →

What Is Contrastive Divergence?

Kullback-Leibler divergence

In contrastive divergence the Kullback-Leibler divergence (KL-divergence) between the data distribution and the model distribution is minimized (here we assume the distributions to be discrete):

$$ \mathrm{KL}(P_0 \,\|\, P_\theta) = \sum_x P_0(x) \log \frac{P_0(x)}{P_\theta(x)} $$

Here $P_0$ is the observed data distribution, $P_\theta$ is the model distribution, and $\theta$ are the model parameters. A divergence (wikipedia) is a fancy term for something that resembles a metric distance. It is not an actual metric because the divergence of $P_0$ given $P_\theta$ can be different (and often is different) from the divergence of $P_\theta$ given $P_0$. The Kullback-Leibler divergence exists only if $P_\theta(x) = 0$ implies $P_0(x) = 0$.
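To make this concrete, here is a small numerical sketch (the distributions are made up for illustration) showing both the asymmetry of the KL-divergence and the support condition just mentioned:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions; infinite when q(x)=0 while p(x)>0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                  # terms with p(x)=0 contribute nothing
    if np.any(q[mask] == 0):
        return np.inf             # support of p not contained in support of q
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(kl_divergence(p, q))  # differs from...
print(kl_divergence(q, p))  # ...this one: the divergence is not symmetric
print(kl_divergence([0.5, 0.5, 0.0], [1.0, 0.0, 0.0]))  # inf: q(x)=0 where p(x)>0
```

In contrastive divergence this quantity is never computed exactly; the gradient is approximated with a few steps of Gibbs sampling instead.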


Yoga 900 on Linux

The Yoga 900 is a beautiful machine that has considerably long battery life and can be folded such that it functions as a tablet. The Yoga arrived on Friday and the entire Crownstone team was enjoying how it came out of the box: it lifts up! If you’re creating your own hardware you suddenly appreciate how other people pay attention to packaging!


A Bathtub in Your Autonomous Car

Will you have a bathtub in your autonomous car?

According to many, the future is a socialist paradise. The autonomous car will change everything! We will be car sharing. We can change parking lots into a lot of parks!

Blame the humans

Let us put aside the technical difficulties in developing autonomous cars. It might take many more years than currently predicted by the new players in this old industry. For example, Sebastian Thrun recently told us in a lecture at Delft that his cars are more careful than humans by design and hence safer. However, there are grounds to expect that being more aggressive is safer in certain circumstances! Going over the speed limit when you have to pass a car. Speeding up considerably before merging into a fast-moving lane on the highway. Can this combination of “aggression” and trust in other drivers be learnt? Or should humans be the ones blamed for slamming into unpredictable autonomous cars!? Anyway, let’s assume these are all minor tweaks that don’t require any form of the procedural and contextual intelligence that we possess as humans. We will have these autonomous cars in 2020, everything is fancy, and humans can be blamed for all accidents.


Are We Welcoming to AI?

Summoning the Demon

Musk announcing OpenAI

Imagine one of the first AIs coming online. What is it gonna read about itself? How would it feel? Would it feel welcome? What is definitely the case is that it will learn a lot about humans. This is for example what Musk is saying about this alien life form:

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”