Reducing redundancy in the bottleneck representation of autoencoders

Highlights

• We propose a scheme to avoid redundant features in the bottleneck representation of autoencoders.

• We explicitly penalize the pairwise correlations between the features and learn a diverse compressed embedding.

• The proposed penalty acts as an unsupervised regularizer and can be integrated into any autoencoder model.

Abstract

Autoencoders (AEs) are a type of unsupervised neural network that can be used to solve various tasks, e.g., dimensionality reduction, image compression, and image denoising. An AE has two goals: (i) compress the original input to a low-dimensional space at the bottleneck of the network topology using an encoder, and (ii) reconstruct the input from the bottleneck representation using a decoder. The encoder and decoder are optimized jointly by minimizing a distortion-based loss, which implicitly forces the model to retain only the information in the input data needed for reconstruction and to reduce redundancy. In this paper, we propose a scheme to explicitly penalize feature redundancies in the bottleneck representation. To this end, we propose an additional loss term, based on the pairwise covariances of the network units, which complements the data reconstruction loss and forces the encoder to learn a more diverse and richer representation of the input. We tested our approach across different tasks, namely dimensionality reduction, image compression, and image denoising. Experimental results show that the proposed loss consistently leads to superior performance compared to the standard AE loss.
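As a concrete illustration of the idea, below is a minimal PyTorch sketch of an AE training step that adds such a decorrelation term to the reconstruction loss: the off-diagonal entries of the batch covariance matrix of the bottleneck features are squared, summed, and weighted into the total loss. The architecture, the variable names, and the weight `lambda_cov` are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

def covariance_penalty(z):
    """Sum of squared off-diagonal entries of the batch covariance
    of the bottleneck features z (shape: batch_size x latent_dim).
    This is a sketch of the redundancy penalty, not the paper's exact loss."""
    z_centered = z - z.mean(dim=0, keepdim=True)
    cov = (z_centered.T @ z_centered) / (z.shape[0] - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()

class Autoencoder(nn.Module):
    """Simple fully-connected AE; the architecture is illustrative only."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
lambda_cov = 0.01  # hypothetical weighting hyperparameter

def train_step(x):
    # Total loss = reconstruction error + weighted decorrelation penalty.
    x_hat, z = model(x)
    loss = nn.functional.mse_loss(x_hat, x) + lambda_cov * covariance_penalty(z)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the penalty depends only on the encoder's outputs and requires no labels, it acts as an unsupervised regularizer and can be attached to the bottleneck of any autoencoder variant, in line with the highlights above.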
