Nonnegative Autoencoders

My intuition says that a part-based decomposition should arise naturally within an autoencoder. To incorporate the next image in an image recognition task, it should be beneficial for gradient descent to be able to navigate towards the optimal set of neural network weights for that image. If not, gradient descent keeps navigating towards some common denominator, and none of the images is properly represented: for each new image that gets classified better, other images get classified worse. With a proper decomposition, learning the next representation does not interfere with previous representations. In Adaptive Resonance Theory (ART), Grossberg calls this interference catastrophic forgetting.

Maybe if we train a network long enough this decomposition strategy will indeed emerge. However, this is not what is normally found: the different representations become coupled, and there is no decomposition that allows the network to explore different feature dimensions independently.

One of the means to obtain a part-based representation is to force the weights in a network to be positive or zero. A well-known example from the literature is nonnegative matrix factorization. Due to the nonnegativity constraint the features are additive. This leads to a sparse basis in which "parts" are summed up into a "whole" object. For example, faces are built up out of features like eyes, nostrils, mouth, ears, eyebrows, etc.
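The additive, parts-based character of nonnegative factorization can be seen in a minimal sketch using the classic Lee-Seung multiplicative updates (not the method of the paper discussed below, just the standard NMF baseline). The data here is synthetic, built from four nonnegative "parts", so the factorization can recover an additive decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples built additively from 4 nonnegative "parts".
parts = rng.random((4, 20))          # 4 basis vectors ("parts")
coeffs = rng.random((100, 4))        # nonnegative mixing coefficients
V = coeffs @ parts                   # observed data, strictly nonnegative

# Lee-Seung multiplicative updates for V ~ W @ H with W, H >= 0.
# Multiplying by nonnegative ratios keeps both factors nonnegative.
k = 4
W = rng.random((100, k)) + 1e-3
H = rng.random((k, 20)) + 1e-3
eps = 1e-9                           # avoid division by zero
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

# The factors stay nonnegative, so the reconstruction is purely additive.
error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(error)
```

Because no subtraction between components is possible, each row of `H` tends to specialize on a distinct part of the data.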

At the University of Louisville, Ehsan Hosseini-Asl (github), Jacek Zurada (who is running for 2019 IEEE President), and Olfa Nasraoui (twitter) studied how nonnegativity constraints can be added to an autoencoder in Deep Learning of Part-based Representation of Data Using Sparse Autoencoders with Nonnegativity Constraints (2016).

An autoencoder whose latent layer contains a part-based representation has only a few nodes active for any particular input. In other words, such a representation is sparse.

One of the ways a sparse representation can be enforced is to limit the average activation of each hidden unit over all $r$ data items. The average activation of hidden unit $j$ is:

$$\hat{p}_j = \frac{1}{r} \sum_{i=1}^{r} h_j(x_i)$$

where $h_j(x_i)$ is the activation of unit $j$ for input $x_i$.

To make sure that the activation is limited, we can bound $\hat{p}_j$ by $p$, a small value close to zero.
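As a small sketch, the per-unit average activation can be computed in NumPy; the network here (sigmoid hidden layer, random weights, names like `W1`, `b1`) is an assumption for illustration, not the authors' architecture:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.random((500, 64))                     # r = 500 data items, 64 inputs
W1 = rng.normal(scale=0.1, size=(64, 32))     # encoder weights (hypothetical)
b1 = np.zeros(32)

# Hidden activations for all r inputs, then the per-unit average p_hat_j.
H = sigmoid(X @ W1 + b1)                      # shape (r, number of hidden units)
p_hat = H.mean(axis=0)                        # average activation of each unit

# The sparsity target p: we want every p_hat_j to stay close to it.
p = 0.05
print(p_hat.shape)
```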

The usual cost function is just the reconstruction error $J_E$. Here, we include the activation limitation by adding an additional term, the Kullback-Leibler divergence between the target $p$ and the average activations $\hat{p}_j$:

$$J_{KL} = \sum_{j} \mathrm{KL}(p \,\|\, \hat{p}_j) = \sum_{j} \left( p \log \frac{p}{\hat{p}_j} + (1-p) \log \frac{1-p}{1-\hat{p}_j} \right)$$

This term is zero when every $\hat{p}_j$ equals $p$ and grows as the average activations drift away from the target.
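The KL sparsity penalty is straightforward to sketch (the function name `kl_sparsity` is mine, for illustration):

```python
import numpy as np

def kl_sparsity(p, p_hat):
    # KL divergence between a Bernoulli(p) target and Bernoulli(p_hat_j),
    # summed over hidden units. Zero when every p_hat_j equals p.
    return np.sum(p * np.log(p / p_hat)
                  + (1 - p) * np.log((1 - p) / (1 - p_hat)))

p = 0.05
print(kl_sparsity(p, np.full(32, 0.05)))   # target met: penalty ~ 0
print(kl_sparsity(p, np.full(32, 0.5)))    # units far too active: large penalty
```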

We can prevent overfitting by regularization. This can be done by adding noise to the input, by dropout, or by penalizing large weights. The latter corresponds to yet another term, a weight decay over all weights:

$$J_{wd} = \frac{1}{2} \sum_{l=1}^{L} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( w_{ij}^{(l)} \right)^2$$

The sizes of adjacent layers are indicated by $s_l$ and $s_{l+1}$, and $L$ is the number of layers.

The total cost function used by the authors for the sparse autoencoder combines all of the above, with the sparsity term weighted by a parameter $\beta$ and the weight decay by a parameter $\lambda$:

$$J_{SA} = J_E + \beta \sum_{j} \mathrm{KL}(p \,\|\, \hat{p}_j) + \frac{\lambda}{2} \sum_{l=1}^{L} \sum_{i=1}^{s_l} \sum_{j=1}^{s_{l+1}} \left( w_{ij}^{(l)} \right)^2$$
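Putting the three terms together, a minimal sketch of such a cost function might look as follows. The single-hidden-layer architecture, sigmoid activations, and default values of $p$, $\beta$, and $\lambda$ are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_ae_cost(X, W1, b1, W2, b2, p=0.05, beta=3.0, lam=3e-3):
    """Sparse-autoencoder cost: reconstruction + beta*KL sparsity + lam*weight decay."""
    r = X.shape[0]
    H = sigmoid(X @ W1 + b1)                      # encoder
    X_hat = sigmoid(H @ W2 + b2)                  # decoder
    J_E = 0.5 / r * np.sum((X_hat - X) ** 2)      # reconstruction error
    p_hat = H.mean(axis=0)                        # average activations
    J_KL = np.sum(p * np.log(p / p_hat)
                  + (1 - p) * np.log((1 - p) / (1 - p_hat)))
    J_wd = 0.5 * (np.sum(W1 ** 2) + np.sum(W2 ** 2))  # weight decay
    return J_E + beta * J_KL + lam * J_wd

rng = np.random.default_rng(2)
X = rng.random((100, 64))
W1 = rng.normal(scale=0.1, size=(64, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 64)); b2 = np.zeros(64)
J = sparse_ae_cost(X, W1, b1, W2, b2)
print(J)
```

In practice the gradients of this scalar would be taken by an autodiff framework; the point here is only how the three weighted terms combine.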

To enforce nonnegativity, we can swap the weight decay for a different regularization term.

For the nonnegativity-constrained autoencoder the authors suggest:

$$f\!\left(w_{ij}^{(l)}\right) = \begin{cases} \left(w_{ij}^{(l)}\right)^2 & \text{if } w_{ij}^{(l)} < 0 \\ 0 & \text{if } w_{ij}^{(l)} \geq 0 \end{cases}$$

which takes the place of the quadratic term $\left(w_{ij}^{(l)}\right)^2$ in the weight decay.

This term penalizes all negative weights; positive weights do not contribute to the cost function, so gradient descent is free to grow them while being pushed away from negative values.
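A sketch of this one-sided penalty (the helper name `nonneg_penalty` is mine):

```python
import numpy as np

def nonneg_penalty(W):
    # Quadratic penalty on negative entries only; positive entries cost nothing,
    # so minimizing it drives the weights towards the nonnegative orthant.
    neg = np.minimum(W, 0.0)
    return 0.5 * np.sum(neg ** 2)

W = np.array([[ 0.5, -0.2],
              [-1.0,  0.3]])
print(nonneg_penalty(W))          # 0.5 * ((-0.2)**2 + (-1.0)**2) = 0.52
print(nonneg_penalty(np.abs(W)))  # no negative entries: penalty is 0.0
```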

Results

The nonnegativity-constrained autoencoder is compared with the Sparse Autoencoder (SA), the Nonnegative Sparse Autoencoder (NSA), and Nonnegative Matrix Factorization (NMF).