This repository contains a curated list of resources related to Generative AI. It includes papers, reports, articles, and timelines that provide insights into the history, maps, definitions, and advancements in the field of Generative AI. Explore the resources below to deepen your understanding of this exciting area.
- Paper Digest - ChatGPT: Recent Papers on ChatGPT: This paper digest presents recent papers on ChatGPT, a widely known language model.
- A Survey of Large Language Models: This paper summarizes the evolution of language models, focusing on Large Language Models (LLMs), and discusses their advances, techniques, and impact on AI development and usage.
- A Comprehensive Survey of AI-Generated Content (AIGC): A History of Generative AI from GAN to ChatGPT: This comprehensive survey provides a historical overview of Generative AI, from Generative Adversarial Networks (GANs) to ChatGPT.
- Toward General Design Principles for Generative AI Applications: This paper presents a set of seven principles for the design of generative AI applications.
- A Comprehensive Survey on Pretrained Foundation Models: A History from BERT to ChatGPT: This survey explores the history and development of pretrained foundation models, from BERT to ChatGPT.
- A Review of Generative AI from Historical Perspectives: This paper by Dipankar Dasgupta, Deepak Venugopal, and Kishor Datta Gupta offers a review of Generative AI from historical perspectives.
- AI Index Report 2023 – Artificial Intelligence Index: This report is produced by the Human-Centered Artificial Intelligence group from Stanford University. It measures trends in AI and provides valuable insights into the field.
- The landscape of generative AI landscape reports: This meta report analyzes reports published by nine venture capital firms, offering a comprehensive overview of the generative AI landscape.
- Base10 Research - generative-ai: This report, produced by the investment firm Base10, provides insights into the field of Generative AI.
- The Generative AI Area: history, maps, and definitions: This article explores the history, maps, and definitions of Generative AI.
- Who Owns the Generative AI Platform? | Andreessen Horowitz: This article discusses the generative AI market and presents an interesting technology stack for the area.
- Generative AI with Cohere: Part 1 - Model Prompting: Cohere AI provides an overview of Generative AI, focusing on model prompting, in this blog post.
- Generative AI with Cohere: Part 2 - Use Case Ideation: In the second part of their series, Cohere AI presents a list of Generative AI use cases.
- Large Language Models and Where to Use Them: Part 1: Cohere AI shares a list of use cases for Large Language Models (LLMs) in this blog post.
- Large Language Models and Where to Use Them: Part 2: Cohere AI continues their exploration of LLM use cases in this follow-up post.
- What's the big deal with Generative AI? Is it the future or the present?: Cohere AI provides a summary of the area of Generative AI, discussing its present and future impact.
- Engines of Wow: AI Art Comes of Age – Steve Murch: This article delves into the emergence and development of AI art.
- togethercomputer/OpenChatKit: provides an open-source base to create both specialized and general-purpose chatbots for various applications.
- Paper Digest - ChatGPT: Recent Papers on ChatGPT: collection of recent papers on ChatGPT.
- Let Us Show You How GPT Works – Using Jane Austen - The New York Times: an article explaining how GPT works using Jane Austen's writings.
- Search-in-the-Chain: Towards Accurate, Credible and Traceable Large Language Models for Knowledge-intensive Tasks | arxiv: a novel framework called Search-in-the-Chain (SearChain) to improve the accuracy, credibility, and traceability of LLM-generated content for multi-hop question answering.
- Mooler0410/LLMsPracticalGuide: a list of practical guide resources of LLMs based on the paper "Harnessing the Power of LLMs in Practice: A Survey on ChatGPT and Beyond".
- Free LLMs and tools for different applications, including data manipulation with pandas-ai:
  - Cerebras-GPT-13B
  - JARVIS
  - OpenLlama
  - pandas-ai
- hpcaitech/ColossalAI: making large AI models cheaper, faster, and more accessible.
- microsoft/LoRA: code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models".
- kyrolabs/awesome-langchain: an awesome list of tools and projects with the awesome LangChain framework.
- Stability AI: announcement of the first models in its StableLM Suite of Language Models.
- Free Dolly | The Databricks Blog: open-source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.
- Summary of ChatGPT/GPT-4 Research and Perspective Towards the Future of Large Language Models: a comprehensive survey of ChatGPT and GPT-4 and their prospective applications across diverse domains.
- lm-sys/FastChat: the release repo for "Vicuna: An Open Chatbot Impressing GPT-4" [demo].
- oobabooga/text-generation-webui: a gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.
- Why LLaMa Is A Big Deal | Hackaday: a post that discusses the impact of LLaMa and Alpaca in popularizing LLMs and even using them in small hardware devices.
- logspace-ai/langflow: a UI for LangChain, designed with react-flow to provide an effortless way to experiment and prototype flows.
- More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models: a paper on LLM security.
- Cohere AI: a way to integrate state-of-the-art language models into applications.
- Langchain for paper summarization: using LangChain to build an app for paper summarization.
- Autonomous LLM Agents: a collection of autonomous LLM agents.
- Vercel for AI agents: helps developers build, deploy, and monitor AI agents, focusing on specialized AI agents that build software for you - your personal software developers.
- 101dotxyz/GPTeam: GPTeam uses GPT-4 to create multiple agents who collaborate to achieve predefined goals.
- Fine-Tuner.ai: a no-code approach to build AI agents.
- AI Agent Basics: Let's Think Step By Step: an article that explains the basics of AI agents.
- neuml/txtai: semantic search and workflows powered by language models.
- facebookresearch/faiss: a library for efficient similarity search and clustering of dense vectors.
- Optimize Your Chatbot's Conversational Intelligence Using GPT-3: a tutorial presenting semantic search concepts.
- GPT-3 playground: playground to experiment with GPT-3 models.
- Fine-tuning GPT-3: a guide on how to customize a model for OpenAI's GPT-3.
- Top 10 GPT-3 Powered Applications to Know in 2022: a list of the top 10 GPT-3 powered applications.
- bigscience/bloom: getting started with BLOOM.
- BLOOM: an open-source 176-billion-parameter model that aims to democratize large language models.
- Plus AI for Google Slides: create AI-powered presentations in Google Slides.
- ChatBotKit: a toolkit to build AI chatbots.
- Boring Report: an app that uses AI to remove sensationalism from the news and makes it boring to read.
- ChatPDF: chat with any PDF! Upload a PDF file and make questions about it for semantic search.
- Character.AI: a platform for creating and talking to advanced AI characters.
- SlidesAI: create presentation slides with AI in minutes.
- Rationale: a decision-making tool powered by the latest GPT and in-context learning.
- DetangleAI: AI-generated summaries of provided legal documents.
- GPT-2 Output Detector: a tool that estimates if a given text is real or generated by GPT.
- HyperWrite: a personal writing assistant with suggestions and sentence completions.
- DeepStory: a platform for co-creation between humans and machines.
- InferKit: a platform for generating text with an API.
- CopyHat: a platform for generating text using AI models.
- Lucid Lyrics - AI Assisted Art: AI-assisted lyrical interpretations by Walter Arnold.
- Authors A.I.: AI-powered text analysis.
- Rytr: an AI writing assistant that helps create content.
- Charisma: a platform for creating interactive stories with believable virtual characters.
- Riku.AI: the vault for your AI creations.
- Taskade: an AI outliner and mind map generator for teams with a built-in AI chat.
- AI Story Generator: a free and fast online AI-powered story generator that writes short stories for you.
Machine Learning
- Caltech CS156: Learning from Data
- Stanford CS229: Machine Learning
- Making Friends with Machine Learning
- Applied Machine Learning
- Introduction to Machine Learning (Tübingen)
- Machine Learning Lecture (Stefan Harmeling)
- Statistical Machine Learning (Tübingen)
- Probabilistic Machine Learning
- MIT 6.S897: Machine Learning for Healthcare (2019)
Deep Learning
- Neural Networks: Zero to Hero
- MIT: Deep Learning for Art, Aesthetics, and Creativity
- Stanford CS230: Deep Learning (2018)
- Introduction to Deep Learning (MIT)
- CMU Introduction to Deep Learning (11-785)
- Deep Learning: CS 182
- Deep Unsupervised Learning
- NYU Deep Learning SP21
- Foundation Models
- Deep Learning (Tübingen)
Scientific Machine Learning
Practical Machine Learning
- Evaluating and Debugging Generative AI
- ChatGPT Prompt Engineering for Developers
- LangChain for LLM Application Development
- LangChain: Chat with Your Data
- Building Systems with the ChatGPT API
- LangChain & Vector Databases in Production
- Building LLM-Powered Apps
- Full Stack LLM Bootcamp
- Full Stack Deep Learning
- Practical Deep Learning for Coders
- Stanford MLSys Seminars
- Machine Learning Engineering for Production (MLOps)
- MIT Introduction to Data-Centric AI
Natural Language Processing
- XCS224U: Natural Language Understanding (2023)
- Stanford CS25 - Transformers United
- NLP Course (Hugging Face)
- CS224N: Natural Language Processing with Deep Learning
- CMU Neural Networks for NLP
- CS224U: Natural Language Understanding
- CMU Advanced NLP 2021/2022
- Multilingual NLP
- Advanced NLP
Computer Vision
- CS231N: Convolutional Neural Networks for Visual Recognition
- Deep Learning for Computer Vision
- Deep Learning for Computer Vision (DL4CV)
- Deep Learning for Computer Vision (neuralearn.ai)
Reinforcement Learning
- Deep Reinforcement Learning
- Reinforcement Learning Lecture Series (DeepMind)
- Reinforcement Learning (Polytechnique Montreal, Fall 2021)
- Foundations of Deep RL
- Stanford CS234: Reinforcement Learning
Graph Machine Learning
Multi-Task Learning
Others
An introductory course in machine learning that covers the basic theory, algorithms, and applications.
- Lecture 1: The Learning Problem
- Lecture 2: Is Learning Feasible?
- Lecture 3: The Linear Model I
- Lecture 4: Error and Noise
- Lecture 5: Training versus Testing
- Lecture 6: Theory of Generalization
- Lecture 7: The VC Dimension
- Lecture 8: Bias-Variance Tradeoff
- Lecture 9: The Linear Model II
- Lecture 10: Neural Networks
- Lecture 11: Overfitting
- Lecture 12: Regularization
- Lecture 13: Validation
- Lecture 14: Support Vector Machines
- Lecture 15: Kernel Methods
- Lecture 16: Radial Basis Functions
- Lecture 17: Three Learning Principles
- Lecture 18: Epilogue
🔗 Link to Course
To learn some of the basics of ML:
- Linear Regression and Gradient Descent
- Logistic Regression
- Naive Bayes
- SVMs
- Kernels
- Decision Trees
- Introduction to Neural Networks
- Debugging ML Models ...
🔗 Link to Course
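To make the first topics above concrete, here is a minimal NumPy sketch (not course material) of linear regression fitted with batch gradient descent on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # 200 samples, 3 features
true_w, true_b = np.array([2.0, -1.0, 0.5]), 0.7
y = X @ true_w + true_b + 0.1 * rng.normal(size=200)

w, b, lr = np.zeros(3), 0.0, 0.1
for _ in range(500):
    err = X @ w + b - y                              # residuals drive the MSE gradient
    w -= lr * (X.T @ err) / len(y)
    b -= lr * err.mean()

print("learned:", w.round(2), round(b, 2))           # close to [2, -1, 0.5] and 0.7
```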
A series of mini lectures covering various introductory topics in ML:
- Explainability in AI
- Classification vs. Regression
- Precision vs. Recall
- Statistical Significance
- Clustering and K-means
- Ensemble models ...
🔗 Link to Course
Course providing an in-depth overview of neural networks.
- Backpropagation
- Spelled-out intro to Language Modeling
- Activation and Gradients
- Becoming a Backprop Ninja
🔗 Link to Course
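As a companion to the language-modeling lecture, here is a toy count-based bigram character model in NumPy (not the course's code); the tiny corpus is made up purely for illustration:

```python
import numpy as np

corpus = ["emma", "olivia", "ava", "isabella", "sophia"]   # made-up example names
chars = sorted(set("".join(corpus))) + ["."]               # "." marks start/end of a word
stoi = {c: i for i, c in enumerate(chars)}

counts = np.ones((len(chars), len(chars)))                 # add-one smoothing
for word in corpus:
    seq = ["."] + list(word) + ["."]
    for a, b in zip(seq, seq[1:]):
        counts[stoi[a], stoi[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)         # each row sums to 1

rng = np.random.default_rng(0)

def sample():
    out, i = [], stoi["."]
    while True:
        i = rng.choice(len(chars), p=probs[i])             # draw the next character
        if chars[i] == ".":
            return "".join(out)
        out.append(chars[i])

print([sample() for _ in range(5)])
```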
Covers the application of deep learning for art, aesthetics, and creativity.
- Nostalgia -> Art -> Creativity -> Evolution as Data + Direction
- Efficient GANs
- Explorations in AI for Creativity
- Neural Abstractions
- Easy 3D Content Creation with Consistent Neural Fields ...
🔗 Link to Course
Covers the foundations of deep learning, how to build different neural networks (CNNs, RNNs, LSTMs, etc.), how to lead machine learning projects, and career advice for deep learning practitioners.
- Deep Learning Intuition
- Adversarial examples - GANs
- Full-cycle of a Deep Learning Project
- AI and Healthcare
- Deep Learning Strategy
- Interpretability of Neural Networks
- Career Advice and Reading Research Papers
- Deep Reinforcement Learning
🔗 Link to Course 🔗 Link to Materials
To learn some of the most widely used techniques in ML:
- Optimization and Calculus
- Overfitting and Underfitting
- Regularization
- Monte Carlo Estimation
- Maximum Likelihood Learning
- Nearest Neighbours
- ...
🔗 Link to Course
The course serves as a basic introduction to machine learning and covers key concepts in regression, classification, optimization, regularization, clustering, and dimensionality reduction.
- Linear regression
- Logistic regression
- Regularization
- Boosting
- Neural networks
- PCA
- Clustering
- ...
🔗 Link to Course
Covers many fundamental ML concepts:
- Bayes rule
- From logic to probabilities
- Distributions
- Matrix Differential Calculus
- PCA
- K-means and EM
- Causality
- Gaussian Processes
- ...
🔗 Link to Course
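For reference, here is a compact NumPy sketch (mine, not the lecturer's code) of Lloyd's algorithm for K-means, one of the topics listed above:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]    # random initialization
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)                         # assignment step
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])  # update step
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(50, 2)) for m in [(0, 0), (3, 3), (0, 3)]])
centers, labels = kmeans(X, k=3)
print(centers.round(2))
```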
The course covers the standard paradigms and algorithms in statistical machine learning.
- KNN
- Bayesian decision theory
- Convex optimization
- Linear and ridge regression
- Logistic regression
- SVM
- Random Forests
- Boosting
- PCA
- Clustering
- ...
🔗 Link to Course
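As a quick illustration of the linear and ridge regression topics, here is a short NumPy sketch (not from the course) using the closed-form solution w = (X^T X + lambda I)^(-1) X^T y:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([1.0, 0.0, -2.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=100)

lam = 1.0                                             # regularization strength
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
print(w_ridge.round(2))                               # shrunk slightly towards zero
```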
This course covers topics such as how to:
- Build and train deep learning models for computer vision, natural language processing, tabular analysis, and collaborative filtering problems
- Create random forests and regression models
- Deploy models
- Use PyTorch, the world's fastest growing deep learning software, plus popular libraries like fastai and Hugging Face
- Foundations and Deep Dive into Diffusion Models
- ...
A seminar series on all sorts of topics related to building machine learning systems.
🔗 Link to Lectures
Specialization course on MLOps by Andrew Ng.
🔗 Link to Lectures
Covers the emerging science of Data-Centric AI (DCAI) that studies techniques to improve datasets, which is often the best way to improve performance in practical ML applications. Topics include:
- Data-Centric AI vs. Model-Centric AI
- Label Errors
- Dataset Creation and Curation
- Data-centric Evaluation of ML Models
- Class Imbalance, Outliers, and Distribution Shift
- ...
🔗 Course Website
🔗 Lecture Videos
🔗 Lab Assignments
To learn some of the latest graph techniques in machine learning:
- PageRank
- Matrix Factorizing
- Node Embeddings
- Graph Neural Networks
- Knowledge Graphs
- Deep Generative Models for Graphs
- ...
🔗 Link to Course
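To make the PageRank topic concrete, here is a minimal power-iteration sketch (mine) on a hand-made four-node graph:

```python
import numpy as np

adj = {0: [1, 2], 1: [2], 2: [0], 3: [2]}            # toy graph: node -> outgoing links
n = 4
M = np.zeros((n, n))                                  # column-stochastic link matrix
for j, outs in adj.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

d = 0.85                                              # damping factor
r = np.full(n, 1.0 / n)
for _ in range(100):
    r = d * (M @ r) + (1 - d) / n                     # damped power iteration
print(r.round(3), "sum =", round(r.sum(), 3))         # node 2 ranks highest; scores sum to 1
```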
To learn the probabilistic paradigm of ML:
- Reasoning about uncertainty
- Continuous Variables
- Sampling
- Markov Chain Monte Carlo
- Gaussian Distributions
- Graphical Models
- Tuning Inference Algorithms
- ...
🔗 Link to Course
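As an illustration of the sampling and MCMC material, here is a bare-bones Metropolis-Hastings sampler (not course code) with a Gaussian random-walk proposal, targeting an unnormalized standard normal density:

```python
import numpy as np

def log_p(x):
    return -0.5 * x ** 2                              # unnormalized log-density of N(0, 1)

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10_000):
    proposal = x + rng.normal(scale=1.0)              # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_p(proposal) - log_p(x):
        x = proposal                                  # accept; otherwise keep the current x
    samples.append(x)

print(round(np.mean(samples), 2), round(np.std(samples), 2))   # roughly 0 and 1
```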
This course introduces students to machine learning in healthcare, including the nature of clinical data and the use of machine learning for risk stratification, disease progression modeling, precision medicine, diagnosis, subtype discovery, and improving clinical workflows.
🔗 Link to Course
To learn some of the fundamentals of deep learning:
- Introduction to Deep Learning
🔗 Link to Course
The course starts gradually with MLPs (multilayer perceptrons) and then progresses to concepts like attention and sequence-to-sequence models.
🔗 Link to Course
🔗 Lectures
🔗 Tutorials/Recitations
To learn some of the widely used techniques in deep learning:
- Machine Learning Basics
- Error Analysis
- Optimization
- Backpropagation
- Initialization
- Batch Normalization
- Style transfer
- Imitation Learning
- ...
🔗 Link to Course
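For the batch normalization topic, here is a tiny NumPy sketch (mine) of the forward pass: normalize each feature over the batch, then rescale with learnable gamma and beta:

```python
import numpy as np

def batchnorm_forward(x, gamma, beta, eps=1e-5):
    mean = x.mean(axis=0)                             # per-feature statistics over the batch
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)           # zero mean, unit variance
    return gamma * x_hat + beta                       # learnable scale and shift

x = np.random.default_rng(0).normal(5.0, 3.0, size=(64, 10))
out = batchnorm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))      # ~0 and ~1 per feature
```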
To learn the latest and most widely used techniques in deep unsupervised learning:
- Autoregressive Models
- Flow Models
- Latent Variable Models
- Self-supervised learning
- Implicit Models
- Compression
- ...
🔗 Link to Course
To learn some of the advanced techniques in deep learning:
- Neural Nets: rotation and squashing
- Latent Variable Energy Based Models
- Unsupervised Learning
- Generative Adversarial Networks
- Autoencoders
- ...
🔗 Link to Course
To learn about foundation models like GPT-3, CLIP, Flamingo, Codex, and DINO.
🔗 Link to Course
This course introduces the practical and theoretical principles of deep neural networks.
- Computation graphs
- Activation functions and loss functions
- Training, regularization and data augmentation
- Basic and state-of-the-art deep neural network architectures including convolutional networks and graph neural networks
- Deep generative models such as auto-encoders, variational auto-encoders and generative adversarial networks
- ...
🔗 Link to Course
- The Basics of Scientific Simulators
- Introduction to Parallel Computing
- Continuous Dynamics
- Inverse Problems and Differentiable Programming
- Distributed Parallel Computing
- Physics-Informed Neural Networks and Neural Differential Equations
- Probabilistic Programming, AKA Bayesian Estimation on Programs
- Globalizing the Understanding of Models
🔗 Link to Course
This course covers topics such as:
- Contextual Word Representations
- Information Retrieval
- In-context learning
- Behavioral Evaluation of NLU models
- NLP Methods and Metrics
- ...
🔗 Link to Course
This course consists of lectures focused on Transformers, providing a deep dive into the architecture and its applications:
- Introduction to Transformers
- Transformers in Language: GPT-3, Codex
- Applications in Vision
- Transformers in RL & Universal Compute Engines
- Scaling transformers
- Interpretability with transformers
- ...
🔗 Link to Course
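To accompany these lectures, here is a minimal NumPy sketch (not from the course) of scaled dot-product attention, the operation at the core of the Transformer:

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # query-key similarities, scaled
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the keys
    return weights @ V                                 # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))  # 4 tokens, dimension 8
print(attention(Q, K, V).shape)                        # (4, 8)
```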
Learn about different NLP concepts and how to apply language models and Transformers to NLP:
- What is Transfer Learning?
- BPE Tokenization
- Batching inputs
- Fine-tuning models
- Text embeddings and semantic search
- Model evaluation
- ...
🔗 Link to Course
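A short example of the transformers library this course is built around; the pipeline below falls back to the library's default sentiment model, and running it downloads weights:

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis")            # uses the library's default model
print(classifier("I really enjoyed this course!"))     # label and confidence score

tokenizer = classifier.tokenizer
print(tokenizer.tokenize("Tokenization splits text into subword units."))
```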
To learn the latest approaches for deep learning based NLP:
- Dependency parsing
- Language models and RNNs
- Question Answering
- Transformers and pretraining
- Natural Language Generation
- T5 and Large Language Models
- Future of NLP
- ...
🔗 Link to Course
To learn the latest neural network based techniques for NLP:
- Language Modeling
- Efficiency tricks
- Conditioned Generation
- Structured Prediction
- Model Interpretation
- Advanced Search Algorithms
- ...
🔗 Link to Course
To learn the latest concepts in natural language understanding:
- Grounded Language Understanding
- Relation Extraction
- Natural Language Inference (NLI)
- NLU and Neural Information Extraction
- Adversarial testing
- ...
🔗 Link to Course
To learn:
- Basics of modern NLP techniques
- Multi-task, Multi-domain, multi-lingual learning
- Prompting + Sequence-to-sequence pre-training
- Interpreting and Debugging NLP Models
- Learning from Knowledge-bases
- Adversarial learning
- ...
🔗 Link to 2021 Edition
🔗 Link to 2022 Edition
To learn the latest concepts for doing multilingual NLP:
- Typology
- Words, Part of Speech, and Morphology
- Advanced Text Classification
- Machine Translation
- Data Augmentation for MT
- Low Resource ASR
- Active Learning
- ...
🔗 Link to 2020 Course
🔗 Link to 2022 Course
To learn advanced concepts in NLP:
- Attention Mechanisms
- Transformers
- BERT
- Question Answering
- Model Distillation
- Vision + Language
- Ethics in NLP
- Commonsense Reasoning
- ...
🔗 Link to Course
Stanford's famous CS231n course. The videos are only available for the Spring 2017 semester. The course is currently known as Deep Learning for Computer Vision, but the Spring 2017 version is titled Convolutional Neural Networks for Visual Recognition.
- Image Classification
- Loss Functions and Optimization
- Introduction to Neural Networks
- Convolutional Neural Networks
- Training Neural Networks
- Deep Learning Software
- CNN Architectures
- Recurrent Neural Networks
- Detection and Segmentation
- Visualizing and Understanding
- Generative Models
- Deep Reinforcement Learning
🔗 Link to Course 🔗 Link to Materials
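As a small companion example, here is a PyTorch sketch (mine, not course code) of the kind of convolutional classifier the course builds up to, sized for 32x32 RGB images such as CIFAR-10:

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (N, 32, 8, 8) for 32x32 inputs
        return self.classifier(x.flatten(1))  # logits per class

model = TinyConvNet()
logits = model(torch.randn(4, 3, 32, 32))     # a fake batch of 4 images
print(logits.shape)                           # torch.Size([4, 10])
```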
To learn some of the fundamental concepts in CV:
- Introduction to deep learning for CV
- Image Classification
- Convolutional Networks
- Attention Networks
- Detection and Segmentation
- Generative Models
🔗 Link to Course
To learn modern methods for computer vision:
- CNNs
- Advanced PyTorch
- Understanding Neural Networks
- RNN, Attention and ViTs
- Generative Models
- GPU Fundamentals
- Self-Supervision
- Neural Rendering
- Efficient Architectures
🔗 Link to Course
To learn modern methods for computer vision:
- Self-Supervised Learning
- Neural Rendering
- Efficient Architectures
- Machine Learning Operations (MLOps)
- Modern Convolutional Neural Networks
- Transformers in Vision
- Model Deployment
🔗 Link to Course
To learn about concepts in geometric deep learning:
- Learning in High Dimensions
- Geometric Priors
- Grids
- Manifolds and Meshes
- Sequences and Time Warping
- ...
🔗 Link to Course
To learn the latest concepts in deep RL:
- Intro to RL
- RL algorithms
- Real-world sequential decision making
- Supervised learning of behaviors
- Deep imitation learning
- Cost functions and reward functions
- ...
🔗 Link to Course
This lecture series is a collaboration between DeepMind and the UCL Centre for Artificial Intelligence.
- Introduction to RL
- Dynamic Programming
- Model-free algorithms
- Deep reinforcement learning
- ...
🔗 Link to Course
You'll learn:
- Instrument A Jupyter Notebook
- Manage Hyperparameters Config
- Log Run Metrics
- Collect artifacts for dataset and model versioning
- Log experiment results
- Trace prompts and responses for LLMs
- ...
🔗 Link to Course
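A hedged sketch of the experiment-tracking workflow described above, written against the Weights & Biases client as an assumption (the bullets do not name a specific tool); the project name and metric are made up:

```python
# Requires `pip install wandb` and `wandb login` with a free account.
import random
import wandb

run = wandb.init(project="demo-llm-evals",               # hypothetical project name
                 config={"lr": 1e-3, "epochs": 3})       # hyperparameter config
for epoch in range(run.config["epochs"]):
    loss = 1.0 / (epoch + 1) + random.random() * 0.05    # placeholder metric
    wandb.log({"epoch": epoch, "loss": loss})            # run metrics appear in the UI
run.finish()
```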
Learn how to use a large language model (LLM) to quickly build new and powerful applications.
🔗 Link to Course
You'll learn:
- Models, Prompt, and Parsers
- Memories for LLMs
- Chains
- Question Answering over Documents
- Agents
🔗 Link to Course
You'll learn about:
- Document Loading
- Document Splitting
- Vector Stores and Embeddings
- Retrieval
- Question Answering
- Chat
🔗 Link to Course
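Independent of LangChain's actual API, here is a dependency-light sketch of the split-embed-retrieve pattern this course implements, with bag-of-words vectors standing in for a real embedding model so it runs anywhere:

```python
import numpy as np

document = ("LangChain loads documents, splits them into chunks, embeds the chunks "
            "into vectors, and retrieves the most relevant chunks for a question "
            "before asking the language model to answer.")

chunks = [document[i:i + 80] for i in range(0, len(document), 80)]   # naive splitter
vocab = sorted(set(document.lower().split()))

def embed(text):
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)    # bag-of-words vector

chunk_vecs = np.stack([embed(c) for c in chunks])                    # the "vector store"

def retrieve(question, k=2):
    q = embed(question)
    sims = chunk_vecs @ q / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]           # top-k chunks

print(retrieve("How are chunks retrieved for a question?"))
```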
Learn how to automate complex workflows using chain calls to a large language model.
🔗 Link to Course
Learn how to use LangChain and Vector DBs in Production:
- LLMs and LangChain
- Learning how to Prompt
- Keeping Knowledge Organized with Indexes
- Combining Components Together with Chains
- ...
🔗 Link to Course
Learn how to build LLM-powered applications using LLM APIs
- Unpacking LLM APIs
- Building a Baseline LLM Application
- Enhancing and Optimizing LLM Applications
- ...
🔗 Link to Course
To learn how to build and deploy LLM-powered applications:
- Learn to Spell: Prompt Engineering
- LLMOps
- UX for Language User Interfaces
- Augmented Language Models
- Launch an LLM App in One Hour
- LLM Foundations
- Project Walkthrough: askFSDL
- ...
🔗 Link to Course
To learn full-stack production deep learning:
- ML Projects
- Infrastructure and Tooling
- Experiment Managing
- Troubleshooting DNNs
- Data Management
- Data Labeling
- Monitoring ML Models
- Web deployment
- ...
🔗 Link to Course
Covers the fundamental concepts of deep learning
- Single-layer neural networks and gradient descent
- Multi-layer neural networks and backpropagation
- Convolutional neural networks for images
- Recurrent neural networks for text
- Autoencoders, variational autoencoders, and generative adversarial networks
- Encoder-decoder recurrent neural networks and transformers
- PyTorch code examples
🔗 Link to Course 🔗 Link to Materials
Covers the most dominant paradigms of self-driving cars: modular pipeline-based approaches as well as deep-learning based end-to-end driving techniques.
- Camera, lidar and radar-based perception
- Localization, navigation, path planning
- Vehicle modeling/control
- Deep Learning
- Imitation learning
- Reinforcement learning
🔗 Link to Course
Designing autonomous decision-making systems is one of the longstanding goals of artificial intelligence. Such systems, if realized, can have a big impact on machine learning for robotics, game playing, control, and health care, to name a few areas. This course introduces reinforcement learning as a general framework for designing such autonomous decision-making systems.
- Introduction to RL
- Multi-armed bandits
- Policy Gradient Methods
- Contextual Bandits
- Finite Markov Decision Process
- Dynamic Programming
- Policy Iteration, Value Iteration
- Monte Carlo Methods
- ...
🔗 Link to Course 🔗 Link to Materials
A mini 6-lecture series by Pieter Abbeel.
- MDPs, Exact Solution Methods, Max-ent RL
- Deep Q-Learning
- Policy Gradients and Advantage Estimation
- TRPO and PPO
- DDPG and SAC
- Model-based RL
🔗 Link to Course
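As a stepping stone to the Deep Q-Learning lecture, here is a tabular Q-learning sketch (mine) on a toy five-state corridor where moving right from the last state earns +1 and ends the episode:

```python
import numpy as np

n_states, n_actions = 5, 2                      # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(3000):                           # episodes
    s = int(rng.integers(n_states))             # random start state
    for _ in range(20):                         # cap episode length
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        done = (a == 1 and s == n_states - 1)   # reaching the goal ends the episode
        r = 1.0 if done else 0.0
        target = r + (0.0 if done else gamma * Q[s_next].max())
        Q[s, a] += alpha * (target - Q[s, a])   # the Q-learning update
        if done:
            break
        s = s_next

print(Q.round(2))                               # "right" should score higher in every state
```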
Covers topics from basic concepts of Reinforcement Learning to more advanced ones:
- Markov decision processes & planning
- Model-free policy evaluation
- Model-free control
- Reinforcement learning with function approximation & Deep RL
- Policy Search
- Exploration
- ...
🔗 Link to Course 🔗 Link to Materials
This is a graduate-level course covering different aspects of deep multi-task and meta learning.
- Multi-task learning, transfer learning basics
- Meta-learning algorithms
- Advanced meta-learning topics
- Multi-task RL, goal-conditioned RL
- Meta-reinforcement learning
- Hierarchical RL
- Lifelong learning
- Open problems
🔗 Link to Course 🔗 Link to Materials
A course introducing foundations of ML for applications in genomics and the life sciences more broadly.
- Interpreting ML Models
- DNA Accessibility, Promoters and Enhancers
- Chromatin and gene regulation
- Gene Expression, Splicing
- RNA-seq, Splicing
- Single cell RNA-sequencing
- Dimensionality Reduction, Genetics, and Variation
- Drug Discovery
- Protein Structure Prediction
- Protein Folding
- Imaging and Cancer
- Neuroscience
🔗 Link to Course
🔗 Link to Materials
This course, from Pieter Abbeel, reviews reinforcement learning and continues on to applications in robotics.
- MDPs: Exact Methods
- Discretization of Continuous State Space MDPs
- Function Approximation / Feature-based Representations
- LQR, iterative LQR / Differential Dynamic Programming
- ...
🔗 Link to Course 🔗 Link to Materials
Reach out on Twitter if you have any questions.
If you are interested in contributing, feel free to open a PR with a link to the course. It will take a bit of time, but I have plans to do many things with these individual lectures. We can summarize the lectures, include notes, provide additional reading material, indicate the difficulty of the content, etc.
You can now find ML Course notes here.