# Sparse Representations by Local Anti-Hebbian Learning

This is an experiment based on Peter Foldiak's work on developing sparse codes. A single layer of neurons (perceptrons) with anti-Hebbian feedback connections between them can learn to code patterns so that statistical dependency between the elements of the representation is reduced while information is preserved. Even this simple network learns the independent patterns underlying the dataset, providing a simple alternative to PCA.
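
The learning is entirely local. Below is a minimal NumPy sketch of the three update rules described in Foldiak's paper (anti-Hebbian lateral weights, Hebbian feedforward weights, and adaptive thresholds); the class name, hyperparameters, and sigmoid settling dynamics are illustrative assumptions, not the code actually used in this repo:

```python
import numpy as np

class FoldiakLayer:
    """Sketch of a Foldiak-style sparse coding layer (hypothetical names and values)."""

    def __init__(self, n_inputs, n_units=16, p_target=1/8,
                 alpha=0.1, beta=0.02, gamma=0.02, seed=0):
        rng = np.random.default_rng(seed)
        self.Q = rng.normal(scale=0.1, size=(n_units, n_inputs))  # feedforward weights
        self.W = np.zeros((n_units, n_units))                     # lateral anti-Hebbian weights
        self.t = np.ones(n_units)                                 # adaptive thresholds
        self.p, self.alpha, self.beta, self.gamma = p_target, alpha, beta, gamma

    def activate(self, x, n_iter=50):
        """Settle the recurrent dynamics y <- f(Qx + Wy - t), then binarise."""
        drive = self.Q @ x - self.t
        y = np.zeros_like(self.t)
        for _ in range(n_iter):
            y = 1.0 / (1.0 + np.exp(-10.0 * (drive + self.W @ y)))  # steep sigmoid
        return (y > 0.5).astype(float)

    def update(self, x, y):
        """One pattern's worth of Foldiak's three local learning rules."""
        # 1. Anti-Hebbian lateral rule: drive pairwise output correlations toward p^2
        self.W -= self.alpha * (np.outer(y, y) - self.p ** 2)
        np.fill_diagonal(self.W, 0.0)
        self.W = np.minimum(self.W, 0.0)   # lateral connections stay inhibitory
        # 2. Hebbian feedforward rule, with decay pulling weights toward the input
        self.Q += self.beta * y[:, None] * (x[None, :] - self.Q)
        # 3. Threshold adaptation: keep each unit's firing probability near p
        self.t += self.gamma * (y - self.p)
```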

Neurons are initialized randomly and look like so: *(figure: initial random weights, "Before")*

I then trained the network on a series of "bars" images, which look like so: *(figure: example bars data)*
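
The bars data is straightforward to generate. Here is a sketch under the usual assumptions for this benchmark (an 8x8 grid, each of the 16 possible bars switched on independently with probability 1/8; the notebook may use different values):

```python
import numpy as np

def make_bars(n_samples, grid=8, p_bar=1/8, seed=0):
    """Each horizontal and vertical bar appears independently with probability p_bar."""
    rng = np.random.default_rng(seed)
    X = np.zeros((n_samples, grid, grid))
    for n in range(n_samples):
        X[n, rng.random(grid) < p_bar, :] = 1.0   # horizontal bars
        X[n, :, rng.random(grid) < p_bar] = 1.0   # vertical bars
    return X.reshape(n_samples, -1)               # flatten each image to a vector
```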

Interestingly, it takes only a one-layer perceptron network with 16 neurons, and no prior knowledge of the dataset, to recover the underlying bars. The feedforward connections developed so that the units became detectors of the most common, highly correlated components (the individual lines). All information is preserved, and virtually all redundancy is removed by the inhibitory weights, since the outputs become close to statistically independent. *(figure: learned weights after training, "After")*
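
Tying the two sketches above together, a training loop might look like this (the epoch and pattern counts are guesses, not the settings used here):

```python
layer = FoldiakLayer(n_inputs=64, n_units=16)
X = make_bars(n_samples=5000)

for epoch in range(10):
    for x in X:
        y = layer.activate(x)   # settle the recurrent dynamics
        layer.update(x, y)      # apply the three local rules

# After training, each row of layer.Q should resemble a single bar.
```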

In contrast, PCA does not even come close to learning the bars: *(figure: leading principal components)*
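
For comparison, PCA on the same data can be run with scikit-learn (an assumption; the repo may compute it differently). Because the bars overlap at their intersections and PCA only removes second-order correlations along orthogonal directions, the leading components are expected to mix many bars rather than isolate individual ones:

```python
from sklearn.decomposition import PCA

pca = PCA(n_components=16)
pca.fit(make_bars(n_samples=5000))

# Each row of pca.components_ is a principal direction; reshaped to 8x8,
# these look like mixtures of bars rather than single bars.
components = pca.components_.reshape(16, 8, 8)
```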

An interactive notebook can be found HERE.