Welcome to the repository for "Master Deep Learning and Generative AI with PyTorch". This repository serves as a structured learning path covering fundamental to advanced deep learning concepts using PyTorch.
This course covers the following topics in depth, organized into the directories below:
```
Master-Deep-Learning-and-Generative-AI-with-PyTorch/
├── README.md
├── torch/                    # Basics of PyTorch
├── linear_regression/        # Implementing Linear Regression in PyTorch
├── activation_functions/     # Activation Function Implementations
├── loss_and_cost_functions/  # Understanding Loss & Cost Functions
├── optimizers/               # Different Optimization Algorithms
├── projects/                 # Hands-on Deep Learning Projects
├── improve_nn_performance/   # Techniques to Improve Neural Networks
├── nlp/                      # Natural Language Processing with PyTorch
├── gen_ai_models/            # Generative AI Models (GANs, VAEs, Transformers)
├── transformers/             # Transformer Models (BERT, GPT, etc.)
└── vision_transformer/       # Vision Transformer (ViT) Implementations
```
- Introduction to PyTorch tensors, operations, and autograd
- Loading datasets with `torchvision`
- Building basic neural networks (a short sketch of these basics follows this list)
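A minimal sketch of the `torch/` basics listed above: tensor operations, autograd, and loading a dataset with `torchvision`. The MNIST dataset and the `./data` download path are illustrative choices, not fixed by the course material.

```python
import torch
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

# Tensors and basic operations
a = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
b = torch.ones(2, 2)
print(a + b)   # element-wise addition
print(a @ b)   # matrix multiplication

# Autograd: track gradients through a computation
x = torch.tensor(3.0, requires_grad=True)
y = x ** 2 + 2 * x
y.backward()
print(x.grad)  # dy/dx = 2x + 2 = 8

# Loading MNIST with torchvision (downloads to ./data if missing)
train_set = datasets.MNIST(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
```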
- Implementing Linear Regression from scratch
- Using PyTorch's `nn.Linear` for regression models
- Loss functions and optimization for regression tasks (see the regression example below)
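A compact regression example in the spirit of the `linear_regression/` module, assuming synthetic data with a known slope and intercept; the hyperparameters are arbitrary.

```python
import torch
import torch.nn as nn

# Synthetic data: y = 2x + 1 plus a little noise
X = torch.randn(100, 1)
y = 2 * X + 1 + 0.1 * torch.randn(100, 1)

model = nn.Linear(in_features=1, out_features=1)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = criterion(model(X), y)  # forward pass + loss
    loss.backward()                # compute gradients
    optimizer.step()               # update weight and bias

print(model.weight.item(), model.bias.item())  # should approach 2.0 and 1.0
```

The same `zero_grad` / `backward` / `step` loop structure reappears throughout PyTorch training code; only the model, loss, and data change.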
- Implementing and understanding activation functions:
- Sigmoid, Tanh, ReLU, Leaky ReLU, Softmax, GELU, SiLU, etc.
- Properties and characteristics of activation functions
- Choosing the right activation function for your model (compared in the snippet below)
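All of the activations listed above ship with PyTorch, so comparing their behavior on the same inputs takes only a few lines:

```python
import torch
import torch.nn.functional as F

x = torch.linspace(-3, 3, 7)

print(torch.sigmoid(x))       # squashes to (0, 1)
print(torch.tanh(x))          # squashes to (-1, 1)
print(F.relu(x))              # max(0, x)
print(F.leaky_relu(x, 0.01))  # small slope for negative inputs
print(F.gelu(x))              # smooth ReLU variant used in Transformers
print(F.silu(x))              # x * sigmoid(x), a.k.a. Swish
print(F.softmax(x, dim=0))    # non-negative, sums to 1
```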
- Understanding Mean Squared Error (MSE), Cross-Entropy, Huber Loss, etc.
- Implementing loss functions in PyTorch
- Visualizing loss functions (a short usage example follows)
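A quick look at the losses named above via PyTorch's built-in modules; the prediction and target values are made up for illustration. Note that `nn.CrossEntropyLoss` expects raw logits and integer class indices, not probabilities:

```python
import torch
import torch.nn as nn

# Regression losses
pred = torch.tensor([2.5, 0.0, 2.0])
target = torch.tensor([3.0, -0.5, 2.0])
print(nn.MSELoss()(pred, target))             # mean squared error
print(nn.HuberLoss(delta=1.0)(pred, target))  # quadratic near zero, linear past delta

# Classification loss: one sample with three classes
logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw scores, no softmax applied
label = torch.tensor([0])                  # true class index
print(nn.CrossEntropyLoss()(logits, label))
```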
- Implementing Gradient Descent, Adam, RMSProp, AdaGrad, etc.
- Comparison of different optimization techniques
- Learning rate scheduling and fine-tuning (sketched below)
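A sketch of how interchangeable optimizers are in PyTorch, plus a scheduler that decays the learning rate; the stand-in model and dummy objective exist only to make the loop runnable:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)  # stand-in model

# Swapping optimizers is a one-line change:
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# alternatives: torch.optim.SGD(..., momentum=0.9), torch.optim.RMSprop(...),
#               torch.optim.Adagrad(...)

# Halve the learning rate every 10 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

for epoch in range(30):
    optimizer.zero_grad()
    loss = model(torch.randn(4, 10)).pow(2).mean()  # dummy objective
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the schedule once per epoch

print(scheduler.get_last_lr())  # 1e-3 / 8 after three halvings
```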
- Implementing end-to-end deep learning projects
- Classification, regression, and real-world applications
- Computer vision and NLP-based projects
- Weight initialization techniques (Xavier, He, etc.)
- Batch Normalization, Dropout, Regularization techniques
- Gradient clipping, learning rate scheduling, early stopping (several of these appear in the snippet below)
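Several of these techniques composed in one place; the layer sizes, dropout rate, and clipping threshold are illustrative defaults, not prescriptions:

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.BatchNorm1d(256),  # normalize activations per mini-batch
    nn.ReLU(),
    nn.Dropout(p=0.5),    # randomly zero activations during training
    nn.Linear(256, 10),
)

# He (Kaiming) init suits ReLU; use nn.init.xavier_uniform_ for tanh/sigmoid
for m in model.modules():
    if isinstance(m, nn.Linear):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        nn.init.zeros_(m.bias)

# weight_decay adds L2 regularization to the update
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

# One training step with gradient clipping before the update
loss = model(torch.randn(32, 784)).pow(2).mean()  # dummy loss
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

Early stopping, by contrast, is a loop-level check rather than a module: stop training once the validation loss has not improved for some number of epochs.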
- Text preprocessing: Tokenization, Stemming, Lemmatization
- Text vectorization: One-Hot Encoding, TF-IDF, Word2Vec
- Implementing NLP models using PyTorch (a preprocessing example follows)
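A minimal preprocessing-and-vectorization sketch using only PyTorch and the standard library; the tiny corpus, whitespace tokenizer, and embedding size are all invented for illustration:

```python
import torch
import torch.nn.functional as F

corpus = ["deep learning with pytorch", "pytorch makes deep learning fun"]

# Whitespace tokenization and a vocabulary built from the corpus
tokens = [sentence.split() for sentence in corpus]
vocab = {word: i for i, word in enumerate(sorted({w for s in tokens for w in s}))}

# One-hot encoding of the first sentence
ids = torch.tensor([vocab[w] for w in tokens[0]])
one_hot = F.one_hot(ids, num_classes=len(vocab)).float()
print(one_hot.shape)  # (sentence length, vocab size)

# Dense, learned vectors (the idea behind Word2Vec) via nn.Embedding
embedding = torch.nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)
print(embedding(ids).shape)  # (sentence length, 8)
```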
- Implementing Variational Autoencoders (VAEs)
- Building Generative Adversarial Networks (GANs)
- Diffusion models and text-to-image generation (a minimal VAE sketch follows this list)
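To make the VAE idea concrete, here is a minimal sketch of the reparameterization trick and the VAE loss; `TinyVAE`, its layer sizes, and the latent dimension are hypothetical choices for illustration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Encode to a Gaussian, sample via reparameterization, decode."""
    def __init__(self, in_dim=784, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.decoder(z), mu, logvar

x = torch.rand(8, 784)  # dummy batch of flattened images
recon, mu, logvar = TinyVAE()(x)

# Loss = reconstruction term + KL divergence to the N(0, I) prior
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
loss = F.binary_cross_entropy(recon, x, reduction="sum") + kl
```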
- Attention mechanism and self-attention
- Implementing Transformer models like BERT, GPT, T5
- Pretrained Transformers and fine-tuning (see the attention example below)
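The attention mechanism underlying all of these models fits in a few lines; the sequence length and embedding size below are arbitrary:

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)  # each query's weights sum to 1
    return weights @ v

# Self-attention: queries, keys, and values all come from the same sequence
seq = torch.randn(1, 5, 64)  # (batch, tokens, embedding dim)
out = scaled_dot_product_attention(seq, seq, seq)
print(out.shape)  # torch.Size([1, 5, 64])
```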
- Understanding Vision Transformer (ViT)
- Implementing Vision Transformer using PyTorch
- Image classification using Transformer-based models (patch embedding is sketched below)
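ViT's first step, turning an image into a sequence of patch tokens, can be expressed as a single strided convolution; the 16x16 patches and 768-dim embeddings below follow the ViT-Base configuration, but other sizes work the same way:

```python
import torch
import torch.nn as nn

# kernel == stride == patch size: one convolution output per patch
patch_size, embed_dim = 16, 768
patch_embed = nn.Conv2d(3, embed_dim, kernel_size=patch_size, stride=patch_size)

img = torch.randn(1, 3, 224, 224)            # one 224x224 RGB image
patches = patch_embed(img)                   # (1, 768, 14, 14)
tokens = patches.flatten(2).transpose(1, 2)  # (1, 196, 768): one token per patch

# Prepend a learnable [CLS] token; positional embeddings are added afterwards
cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
tokens = torch.cat([cls_token.expand(tokens.size(0), -1, -1), tokens], dim=1)
print(tokens.shape)  # (1, 197, 768), ready for the Transformer encoder
```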
- Basic programming knowledge (preferably in Python)
- High school-level mathematics (linear algebra, calculus)
- No prior AI/ML experience needed—this course covers everything from scratch!
- AI & Deep Learning Beginners
- Software Developers transitioning to AI/ML
- Data Scientists looking to master PyTorch
If you want the complete guided learning experience, enroll in the course on Udemy: *Master Deep Learning and Generative AI with PyTorch in Hindi*.