- Understood the challenge
- See PPP
- Set up the repo
- First tests with EfficientNet
- Experimented with fastai
- Played with the signal-to-noise ratio (toy-data sketch after this list):
  - high ratio: strong signal -> the model learns and predicts well
  - low ratio: little signal -> no better than guessing (accuracy 0.5)
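A minimal sketch of how such a controllable-SNR toy set can be generated (the shapes and the drifting-tone signal model are illustrative assumptions, not the real challenge format):

```python
import numpy as np

def make_sample(snr, n_freq=360, n_time=256, rng=None):
    """Spectrogram-like array: Gaussian noise, optionally plus a drifting tone."""
    rng = rng or np.random.default_rng()
    spec = rng.normal(0.0, 1.0, (n_freq, n_time))
    label = int(rng.integers(0, 2))            # half the samples carry a signal
    if label:
        start = int(rng.integers(0, n_freq))   # starting frequency bin
        drift = rng.uniform(-0.2, 0.2)         # bins per time step
        for t in range(n_time):
            spec[int(start + drift * t) % n_freq, t] += snr
    return spec, label
```

With a high `snr` the tone is obvious and the model fits quickly; near zero the sample is indistinguishable from pure noise, so accuracy stalls at 0.5.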
- Problems with Charlie
- Looked at papers (paper1, paper2)
- Started visualizing the data
- First submission: cheated the challenge, no AI, just interpolation (see the cheat-task notebook on Kaggle; the flavor of the idea is sketched below)
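A hypothetical sketch of the flavor of that trick (indices and values are made up; the actual notebook differs in the details): predictions come from linearly interpolating between target values that can be read off directly, with no model involved.

```python
import numpy as np

# Hypothetical anchors: targets known at a few indices; the rows in
# between are filled purely by linear interpolation -- no AI at all.
known_idx  = np.array([0, 10, 20, 30])
known_vals = np.array([0.10, 0.40, 0.35, 0.90])
query_idx  = np.arange(31)
preds = np.interp(query_idx, known_idx, known_vals)
print(preds[:5])
```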
- Started a CRNN based on this link (rough model sketch further below)
- Created a new dataset from the Kaggle training data
- Our generated data is not really suitable for training: it has 5000 frequencies, while the real data only has 360 (one possible fix is sketched below)
- Fixed the problems with the CRNN; it works now
- Set a fixed sequence length for the CRNN so it does not have to guess it
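One way to bridge the mismatch would be to resample the generated spectrograms down to 360 bins. A sketch of that idea (Fourier resampling via SciPy; whether it preserves the narrowband signals well enough would still need checking):

```python
import numpy as np
from scipy.signal import resample

def downsample_freq(spec, target_bins=360):
    """Resample a (5000, T) generated spectrogram to (target_bins, T)."""
    return resample(spec, target_bins, axis=0)

fake = np.random.randn(5000, 256)
print(downsample_freq(fake).shape)  # (360, 256)
```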
- Used TensorBoard to visualize training progress
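For reference, the basic logging pattern (log dir and metric values here are placeholders):

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/crnn_demo")    # placeholder log dir
for epoch in range(3):
    train_loss = 1.0 / (epoch + 1)          # stand-in for the real loss
    writer.add_scalar("loss/train", train_loss, epoch)
writer.close()
# inspect with: tensorboard --logdir runs
```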
- Read this paper -> increased the kernel size
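A minimal sketch of the overall CRNN shape (layer sizes, the 7x7 kernels, and the adaptive pooling that enforces the fixed sequence length are illustrative assumptions, not our exact model):

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """CNN front end + GRU over a fixed-length sequence of time steps."""
    def __init__(self, seq_len=16, hidden=128, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=7, padding=3),   # larger kernels
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, seq_len)),  # collapse freq, fix time steps
        )
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (B, 1, n_freq, n_time)
        f = self.cnn(x)                        # (B, 64, 1, seq_len)
        f = f.squeeze(2).permute(0, 2, 1)      # (B, seq_len, 64)
        out, _ = self.rnn(f)                   # (B, seq_len, 2*hidden)
        return self.head(out[:, -1])           # classify from the last step

print(CRNN()(torch.randn(4, 1, 360, 256)).shape)  # torch.Size([4, 2])
```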
- Set up data and code on Charlie -> took a while
- Analyzed the Kaggle solution discussed last week
- Adding more noise is a valid augmentation (maybe even the best augmentation); sketch below
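E.g., something along these lines (the noise-strength range is a guess):

```python
import torch

def add_noise(spec: torch.Tensor, sigma_max: float = 0.5) -> torch.Tensor:
    """Augment by injecting extra Gaussian noise with a random strength."""
    sigma = torch.rand(()) * sigma_max       # random strength per call
    return spec + sigma * torch.randn_like(spec)
```

Extra noise only lowers the SNR, which is exactly how the harder real samples look, so the labels stay valid.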
- Set up and successfully developed on Charlie (via SSH and the Git repo)
- Prepared the presentation for last Thursday
- Started with a transformer from this article
- Understood the template code and modified it to work on the trained CNN's output (rough sketch after this block)
- Adjusted the dataloader
- Trained 2-3 different transformers on the CNN output (33 h of training)
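Rough shape of the transformer stage (feature dimension, model width, and the CLS-token readout are assumptions; only seq_len = 16 comes from our notes):

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    """Transformer encoder over a sequence of precomputed CNN feature vectors."""
    def __init__(self, seq_len=16, feat_dim=512, d_model=256,
                 n_heads=4, n_layers=4, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))   # learned CLS token
        self.pos = nn.Parameter(torch.zeros(1, seq_len + 1, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):                        # feats: (B, seq_len, feat_dim)
        x = self.proj(feats)
        cls = self.cls.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos    # prepend CLS, add positions
        return self.head(self.encoder(x)[:, 0])      # classify from the CLS token

logits = SeqClassifier()(torch.randn(4, 16, 512))    # dummy CNN features
print(logits.shape)                                  # torch.Size([4, 2])
```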
- With the transformer and seq_len 16 we are clearly better than guessing: accuracy around 62 %