CNN, AutoEncoder, Generative Models, Score Matching, StyleGAN, Self-Supervision.
For all these projects I used PyTorch, TensorFlow, and Matplotlib.
Classification using convolutional neural networks.
Working on the CIFAR10 dataset, tracking results with wandb.
Exploring different architectures, understanding the importance of non-linearity and the cascaded receptive field.
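A minimal PyTorch sketch of this kind of CIFAR10 classifier; the exact layer counts and channel widths here are illustrative assumptions, not the project's actual architecture:

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Toy CIFAR10 classifier: stacked convs grow the receptive field,
    and the ReLUs between them supply the non-linearity."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# forward pass on a random CIFAR10-shaped batch
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```

Removing the ReLUs collapses the stacked convolutions into a single linear map, which is one way to see why the non-linearity matters.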
Example of the bias-variance tradeoff:
Auto-Encoding (AE) and transfer-learning with the AE over the MNIST digits dataset.
We tested aspects of the AE such as dimensionality reduction, interpolation, and decorrelation.
I used BCELoss as the criterion and Adam as the optimizer.
The encoder is built from Conv layers with sigmoid activations and an FC layer at the end; the decoder has one FC layer followed by Deconv layers.
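A sketch of an autoencoder with this layout (conv + sigmoid encoder ending in an FC layer; decoder with one FC layer then deconvs), trained with BCELoss and Adam. The latent size and channel counts are assumptions for illustration:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """MNIST autoencoder: conv encoder with sigmoid activations ending
    in an FC layer; decoder is one FC layer followed by deconvs."""
    def __init__(self, latent_dim=16):  # latent size is an assumption
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.Sigmoid(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.Sigmoid(),  # 14 -> 7
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, latent_dim),
        )
        self.decoder_fc = nn.Linear(latent_dim, 32 * 7 * 7)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.Sigmoid(),  # 7 -> 14
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 14 -> 28
        )

    def forward(self, x):
        z = self.encoder(x)
        h = self.decoder_fc(z).view(-1, 32, 7, 7)
        return self.decoder(h)

model = ConvAE()
criterion = nn.BCELoss()  # final sigmoid keeps outputs in (0, 1) for BCE
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(4, 1, 28, 28)  # stand-in for an MNIST batch
recon = model(x)
loss = criterion(recon, x)    # reconstruction loss against the input itself
loss.backward()
optimizer.step()
```

Shrinking `latent_dim` is what drives the dimensionality-reduction and reconstruction-quality experiments mentioned above.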
Demonstration of reconstruction success in relation to latent space size:
In this part I trained a generative adversarial model to produce images of a particular distribution.
The generator network follows the DCGAN architecture: an FC layer followed by conv (transposed-convolution) layers. The discriminator is a classification network that decides whether an image is real or fake.
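A sketch of this generator/discriminator pair in the DCGAN style, sized for 28x28 single-channel images; the latent dimension and channel widths are illustrative assumptions:

```python
import torch
import torch.nn as nn

latent_dim = 64  # assumed size of the noise vector

# Generator: FC projection, then transposed convs upsample to image size
generator = nn.Sequential(
    nn.Linear(latent_dim, 128 * 7 * 7),
    nn.Unflatten(1, (128, 7, 7)),
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 7 -> 14
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),    # 14 -> 28
    nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: a plain classifier deciding real vs. fake
discriminator = nn.Sequential(
    nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 28 -> 14
    nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 14 -> 7
    nn.Flatten(),
    nn.Linear(128 * 7 * 7, 1),
    nn.Sigmoid(),  # probability that the image is real
)

fake = generator(torch.randn(4, latent_dim))
score = discriminator(fake)
print(fake.shape, score.shape)  # torch.Size([4, 1, 28, 28]) torch.Size([4, 1])
```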
Main points: loss saturation, model inversion, and image restoration, including denoising and inpainting.
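The loss-saturation point can be demonstrated numerically: early in training the discriminator confidently rejects fakes, so the original generator loss log(1 - D(G(z))) has a near-zero gradient, while the non-saturating alternative -log(D(G(z))) does not. A small autograd check (the score value 0.01 is an arbitrary illustrative choice):

```python
import torch

# Early in training D confidently rejects fakes: D(G(z)) close to 0.
d_sat = torch.tensor(0.01, requires_grad=True)
d_ns = torch.tensor(0.01, requires_grad=True)

# Original (saturating) generator loss: minimize log(1 - D(G(z))).
torch.log(1 - d_sat).backward()
# Non-saturating trick: minimize -log(D(G(z))) instead.
(-torch.log(d_ns)).backward()

# The saturating loss yields a tiny gradient, the non-saturating one a large one.
print(d_sat.grad.item(), d_ns.grad.item())  # ≈ -1.01 vs -100.0
```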