
Commit ab856a6

Update initialization info
1 parent 6169638 commit ab856a6

File tree

2 files changed: +24 -5 lines changed


report/bibliography.bib

Lines changed: 18 additions & 0 deletions
@@ -1,3 +1,21 @@
+@article{Kaiming2015,
+  author        = {Kaiming He and
+                   Xiangyu Zhang and
+                   Shaoqing Ren and
+                   Jian Sun},
+  title         = {Delving Deep into Rectifiers: Surpassing Human-Level Performance on
+                   ImageNet Classification},
+  journal       = {CoRR},
+  volume        = {abs/1502.01852},
+  year          = {2015},
+  url           = {http://arxiv.org/abs/1502.01852},
+  archivePrefix = {arXiv},
+  eprint        = {1502.01852},
+  timestamp     = {Mon, 13 Aug 2018 16:47:36 +0200},
+  biburl        = {https://dblp.org/rec/bib/journals/corr/HeZR015},
+  bibsource     = {dblp computer science bibliography, https://dblp.org}
+}
+
 @incollection{Bengio+chapter2007,
   author = {Bengio, Yoshua and LeCun, Yann},
   booktitle = {Large Scale Kernel Machines},

report/content.tex

Lines changed: 6 additions & 5 deletions
@@ -1,3 +1,4 @@
+
 \section{Introduction}
 
 The paper \supercite{tonolini2019variational} proposes an improvement over the Variational Auto-Encoder (VAE) architecture \supercite{Rezende2014, Kingma2013} by explicitly modelling sparsity in the latent space with a Spike and Slab prior distribution and drawing ideas from sparse coding theory. The main motivation behind their work lies in the ability to infer truly sparse representations from generally intractable non-linear probabilistic models, simultaneously addressing the problem of lack of interpretability of latent features. Moreover, the proposed model improves the classification accuracy using the low-dimensional representations obtained, and significantly adds robustness while varying the dimensionality of latent space. \\
@@ -10,7 +11,7 @@ \section{Related Work}
 
 Variational Auto-Encoders have been extensively studied \supercite{Doersch2016} and widely modified in recent years in order to encourage certain behavior of the latent space variables \supercite{Nalisnick2016, rolfe2016discrete, casale2018gaussian} or to be further applied for particular tasks \supercite{chen2016variational,walker2016uncertain, kusner2017grammar, jin2018junction}. Regarding the sparsity of the latent space for VAEs, previous work in the literature has focused on either explicitly incorporating a regularization term to benefit sparsity \supercite{louizos2017learning}, or fixing a prior distribution, such as Rectified Gaussians by \supercite{salimans2016structured}, discrete distributions by \supercite{van2017neural}, student-t distribution for Variational Information Bottleneck by \supercite{Chalk2016} and Stick Breaking Processes by \supercite{Nalisnick2016}. \\
 
-Nonetheless, previous works have not allowed to explicitly model sparsity by incorporating linear sparse coding to non-linear probabilistic generative models. The paper we aim to reproduce, offers a connection between both areas through the Spike-and-Slab distribution, chosen as a prior distribution for the latent variables. Although this distribution has been commonly used for modeling sparsity \supercite{Goodfellow2012}, it has rarely been applied to generative models. Moreover, since sparse coding imposes efficient data representations \supercite{ishwaran2005spike, titsias2011spike, bengio2013representation}, the authors demonstrate qualitatively how the sparse learned representations can capture subjectively understandable sources of variation. \\
+Nonetheless, previous works have not allowed to explicitly model sparsity by incorporating linear sparse coding to non-linear probabilistic generative models. The paper we aim to reproduce offers a connection between both areas through the Spike-and-Slab distribution, chosen as a prior distribution for the latent variables. Although this distribution has been commonly used for modeling sparsity \supercite{Goodfellow2012}, it has rarely been applied to generative models. Moreover, since sparse coding imposes efficient data representations \supercite{ishwaran2005spike, titsias2011spike, bengio2013representation}, the authors demonstrate qualitatively how the sparse learned representations can capture subjectively understandable sources of variation. \\
 
 Following the line of latent feature interpretability, we can observe that the authors' idea is closely related to the Epitomic VAE by \supercite{yeung2016epitomic}, which learns the latent dimensions the recognition function should exploit. Many recent approaches, mostly related to disentangled representations, such as $\beta$-VAE \supercite{higgins2016beta, Burgess2018} or Factor-VAE by \supercite{Kim2018}, focus on learning interpretable factorized representations of the independent data generative factors via generative models. However, although these approaches explicitly induce interpretation of the latent features, they do not directly produce sparse representations, in contrast with the VSC model. Hence, the authors' aim is to develop a model that directly induces sparsity in a continuous latent space, which, in addition, results in a higher expectation of interpretability in large latent spaces. \\
 
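To make the Spike-and-Slab prior discussed in this hunk concrete, the following minimal sketch samples latent codes from such a prior; the latent dimensionality and the sparsity level alpha are illustrative values and are not taken from the paper or the report.

import torch

def sample_spike_and_slab(n_samples, latent_dim, alpha=0.1):
    # Each latent dimension is exactly zero (the "spike") with probability
    # 1 - alpha, and drawn from a standard Gaussian (the "slab") otherwise.
    spike = torch.bernoulli(torch.full((n_samples, latent_dim), alpha))
    slab = torch.randn(n_samples, latent_dim)
    return spike * slab

z = sample_spike_and_slab(n_samples=4, latent_dim=64, alpha=0.1)
print((z != 0).float().mean())  # fraction of active codes, roughly alpha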

@@ -42,7 +43,7 @@ \subsection{Datasets}
 
 We test the VSC model on two commonly used image datasets: MNIST \supercite{lecun1998gradient} and Fashion-MNIST \supercite{xiao2017fashion}, composed of $28 \times 28$ greyscale images of handwritten digits and pieces of clothing, respectively. Following the paper description, we run most of the experiments with these datasets. In addition, the CelebA faces dataset \supercite{liu2015deep} was used to showcase qualitative results. We include in our repository routines to download and preprocess the datasets.
 
-\textbf{Observations}
+\subsubsection{Observations}
 
 \begin{itemize}
 \item For the CelebA dataset, we used a subset of 100K samples for training and 20K samples for testing, which were center cropped and downsampled to a size of $32 \times 32$ using all $3$ RGB channels, as described in the paper.
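As a rough companion to the CelebA observation above, the preprocessing (center crop followed by downsampling to $32 \times 32$ over all three RGB channels) could be expressed with torchvision transforms roughly as follows; the 148-pixel crop size and the data root are assumptions, and the 100K/20K subset selection is omitted.

from torchvision import datasets, transforms

# Center-crop the raw CelebA images, then downsample to 32x32 RGB tensors.
# The 148-pixel crop is an assumed value, not one stated in the report.
celeba_transform = transforms.Compose([
    transforms.CenterCrop(148),
    transforms.Resize((32, 32)),
    transforms.ToTensor(),  # 3x32x32 float tensor in [0, 1]
])

train_set = datasets.CelebA(root="./data", split="train",
                            transform=celeba_transform, download=True)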
@@ -59,11 +60,11 @@ \subsection{Implementation Details}
 
 For reproducibility purposes, we stored all the checkpoints for the trained models, together with the training logs, which can be visualized using TensorBoard.
 
-\textsc{Observations}
+\subsubsection{Observations}
 \begin{itemize}
 \item One of the missing details in the paper was the batch size. We assumed it to be $32$ samples per batch due to our memory restrictions.
 \item The original paper suggests using $20,000$ iterations for model training with the ADAM optimizer \supercite{kingma2014adam} and a learning rate ranging between $0.001$ and $0.01$. In particular, we implemented the VSC model in such a way that the number of epochs is a hyperparameter. Thus, we fixed the number of epochs to be equivalent to the number of iterations given by the paper; i.e., for MNIST and Fashion-MNIST we trained the VSC model for $11$ epochs with a batch size of $32$. We fixed the learning rate at $0.001$.
-\item A minor downside of the paper is that the weights initialization method was not specified.
+\item A minor downside of the paper is that the weights initialization method was not specified. We initialized the weights with uniform random variables using the Kaiming initialization method \supercite{Kaiming2015} for all the layers, which is also the default method for linear layers in PyTorch.
 \item For the recognition function, in order to avoid numerical instability, we suggest either using $\textit{clamp}$ or a Sigmoid activation function to avoid spike values of zero (thus ensuring $\gamma_i < 1$).
 \end{itemize}
 
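The observations in this hunk pin down most of the training configuration; the sketch below gathers them in one place (batch size $32$, Adam with learning rate $0.001$, Kaiming-initialized linear layers, and a Sigmoid plus clamp keeping the spike probabilities $\gamma_i$ strictly inside $(0, 1)$). The layer sizes and the VSCEncoder name are placeholders, not the repository's actual classes.

import torch
import torch.nn as nn

class VSCEncoder(nn.Module):
    # Illustrative recognition network: outputs the mean, log-variance and
    # spike probabilities gamma of a spike-and-slab posterior.
    def __init__(self, in_dim=784, hidden=400, latent_dim=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.spike = nn.Linear(hidden, latent_dim)
        # nn.Linear already defaults to Kaiming (uniform) initialization in
        # PyTorch; the explicit loop below is redundant and shown for clarity.
        for module in self.modules():
            if isinstance(module, nn.Linear):
                nn.init.kaiming_uniform_(module.weight, a=5 ** 0.5)

    def forward(self, x):
        h = self.body(x)
        # Sigmoid maps to (0, 1); the clamp keeps gamma away from exact 0/1,
        # which avoids numerical instability in the KL term.
        gamma = torch.sigmoid(self.spike(h)).clamp(1e-6, 1 - 1e-6)
        return self.mu(h), self.logvar(h), gamma

encoder = VSCEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)  # lr from the report
batch_size = 32   # assumed value, as noted in the observations
num_epochs = 11   # MNIST / Fashion-MNIST setting derived in the report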

@@ -134,7 +135,7 @@ \subsection{Interpretation of sparse codes}
 \end{figure}
 
 \subsection{Visualization / Traversing Latent Space}
-We explored how sampling from the latent space distribution can allow us to obtain interpretable variations in the generated images (Figure \ref{fig:traversals}), and also how conditional sampling produces arguably realistic new samples from the same conceptual entity. (Figure \ref{fig:conditional}). The traversal of the latent space is performed varying the latent codes with a high absolute value for a given image, one at a time. We can observe that these latent codes indeed represent interpretable features of the datasets, such as the digits shape in MNIST, the color and shape of the clothes in Fashion-MNIST and the orientation, background color, skin color and hair color in the CelebA results.
+We explored how sampling from the latent space distribution can allow us to obtain interpretable variations in the generated images (Figure \ref{fig:traversals}), and also how conditional sampling produces arguably realistic new samples from the same conceptual entity (Figure \ref{fig:conditional}). The traversal of the latent space is performed varying the latent codes with a high absolute value for a given image, one at a time. We can observe that these latent codes indeed represent interpretable features of the datasets, such as the digits shape in MNIST, the color and shape of the clothes in Fashion-MNIST and the orientation, background color, skin color and hair color in the CelebA results.
 
 \begin{figure}[!h]
 \captionsetup{justification=centering,margin=0.3cm}
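The traversal procedure described in the hunk above (perturb the latent codes with the largest absolute values one at a time and decode each variant) could be sketched as follows; model.encode and model.decode are hypothetical method names standing in for the repository's actual interface, and the thresholding of the spike probabilities and the traversal range are likewise illustrative assumptions.

import torch

@torch.no_grad()
def traverse_latent(model, image, n_codes=5, steps=7, span=3.0):
    # Encode one image, pick the n_codes latent dimensions with the largest
    # magnitude, and decode a sweep of values over [-span, span] for each.
    mu, logvar, gamma = model.encode(image.unsqueeze(0))
    z = mu * (gamma > 0.5)                      # keep only "active" codes
    top = z.abs().squeeze(0).argsort(descending=True)[:n_codes]
    rows = []
    for idx in top:
        row = []
        for value in torch.linspace(-span, span, steps):
            z_mod = z.clone()
            z_mod[0, idx] = value               # perturb one code at a time
            row.append(model.decode(z_mod))
        rows.append(torch.cat(row, dim=0))      # one strip of images per code
    return rows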
