Explanations of key concepts in ML
- AlexNet
- BART
- BEiT
- BERT
- Codex
- ColD Fusion
- ConvMixer
- Deep and Cross Network
- DeiT
- DenseNet
- DistilBERT
- DiT
- DocFormer
- Donut
- EfficientNet
- ELMo
- Entity Embeddings
- ERNIE-Layout
- FastBERT
- Fast RCNN
- Faster RCNN
- Feature Pyramid Network
- Feature Tokenizer Transformer
- Focal Loss (RetinaNet)
- GPT
- InceptionNet
- InceptionNetV2 and InceptionNetV3
- InceptionNetV4 and InceptionResNet
- LAMBERT
- Layout LM
- Layout LM v2
- Layout LM v3
- LeNet
- LiLT
- Longformer
- Mask RCNN
- Masked Autoencoder
- MobileBERT
- MobileNetV1
- MobileNetV2
- MobileNetV3
- MobileViT
- RCNN
- ResNet
- ResNeXt
- SentenceBERT
- Single Shot MultiBox Detector (SSD)
- StructuralLM
- Swin Transformer
- T5
- TableNet
- TabTransformer
- Tabular ResNet
- TinyBERT
- Transformer
- TransformerXL
- UDOP
- VGG
- Vision Transformer
- Wide and Deep Learning
- Xception
- XLNet
Reading lists, grouped by topic:

Convolutional Neural Networks
- LeNet
- AlexNet
- VGG
- InceptionNet
- InceptionNetV2 and InceptionNetV3
- ResNet
- InceptionNetV4 and InceptionResNet
- ResNeXt
- Xception
- DenseNet
- MobileNetV1
- MobileNetV2
- MobileNetV3
- EfficientNet

Language Models
- BART
- BERT
- Codex
- DistilBERT
- FastBERT
- GPT
- Longformer
- MobileBERT
- SentenceBERT
- T5
- TinyBERT
- Transformer
- TransformerXL
- XLNet

Tabular Deep Learning
- Entity Embeddings
- Tabular ResNet
- Wide and Deep Learning
- Deep and Cross Network
- TabTransformer
- Feature Tokenizer Transformer

Further reading lists:
- Layout Transformers
- Region-based Convolutional Neural Networks
Reach out to Ritvik or Elvis if you have any questions.
If you are interested in contributing, feel free to open a PR.