6.438 Roadmap
- Nothing specifically for this lecture, but you may want to learn about conditional independence now, since that gets used a lot early on in the course.
- Bayesian networks, or Bayes nets, known in 438-land as directed graphical models
- d-separation, a way of analyzing conditional independence structure in Bayes nets (a small numeric example of the collider rule appears after this list)
- Bayes Ball, an efficient algorithm for computing Bayes net conditional independencies. Note that while the course uses Bayes Ball to find conditional independencies, you may find it more intuitive to think directly in terms of the d-separation rules, as in the previous item.
- Markov random fields (MRFs), also known as undirected graphical models
- factor graphs. Note that factor graphs and undirected graphical models are two different ways to represent the structure of Boltzmann distributions, and the only real difference is that factor graphs are a more fine-grained notation.
- converting between graphical models
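
To go with the d-separation item above, here is a minimal numeric sketch of the "explaining away" pattern, the v-structure X -> Z <- Y: X and Y are marginally independent, but become dependent once the collider Z is observed. All of the probability tables below are invented purely for illustration.

```python
# Numeric check of the v-structure (collider) rule of d-separation.
import numpy as np

p_x = np.array([0.6, 0.4])           # P(X), made-up values
p_y = np.array([0.7, 0.3])           # P(Y)
p_z_given_xy = np.array([            # P(Z | X, Y), indexed [x, y, z]
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.5, 0.5], [0.1, 0.9]],
])

# Joint P(X, Y, Z) = P(X) P(Y) P(Z | X, Y)
joint = p_x[:, None, None] * p_y[None, :, None] * p_z_given_xy

# Marginally, P(X, Y) factorizes, so X and Y are independent.
p_xy = joint.sum(axis=2)
print(np.allclose(p_xy, np.outer(p_x, p_y)))        # True

# Conditioned on Z = 1, the joint over (X, Y) no longer factorizes.
p_xy_given_z1 = joint[:, :, 1] / joint[:, :, 1].sum()
p_x_given_z1 = p_xy_given_z1.sum(axis=1)
p_y_given_z1 = p_xy_given_z1.sum(axis=0)
print(np.allclose(p_xy_given_z1, np.outer(p_x_given_z1, p_y_given_z1)))  # False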
- Nothing to go with this lecture, sorry.
- multivariate Gaussian distribution
- information form for multivariate Gaussians (a short conversion sketch appears after this list)
- Gaussian MRFs
- linear-Gaussian models, or Gaussian Bayes nets
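
As a companion to the information-form item above, here is a minimal sketch of converting a multivariate Gaussian between moment form (mu, Sigma) and information form (J = Sigma^-1, h = J mu), and of conditioning, which is the easy operation in information form. The numbers are illustrative only, not from the course notes.

```python
# Moment form <-> information form for a multivariate Gaussian, plus conditioning.
import numpy as np

mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])

# Moment form -> information form
J = np.linalg.inv(Sigma)             # information (precision) matrix
h = J @ mu                           # potential vector

# Condition on x_3 = 1.2: keep the block of J over the remaining variables
# and absorb the observation into h. (Marginalization, by contrast, is the
# easy operation in moment form.)
keep, obs = [0, 1], [2]
x_obs = np.array([1.2])
J_cond = J[np.ix_(keep, keep)]
h_cond = h[keep] - J[np.ix_(keep, obs)] @ x_obs

# Back to moment form for p(x_1, x_2 | x_3 = 1.2)
Sigma_cond = np.linalg.inv(J_cond)
mu_cond = Sigma_cond @ h_cond
print(mu_cond, Sigma_cond)
```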
- sum-product algorithm. Unfortunately, different sources differ in which version of this algorithm they present. Most of them use the factor graph version, which is covered in a later lecture. Koller and Friedman jump straight to the junction tree (clique tree) version, which is the most general, but it can be a lot to take in all at once. Start with whichever you like, and it should make the other versions easier to understand.
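
To make the message-passing mechanics concrete, here is a minimal sum-product sketch on a three-variable chain MRF with pairwise potentials, checked against brute-force enumeration. The potentials are arbitrary made-up numbers; only the mechanics matter.

```python
# Sum-product on the chain x1 - x2 - x3, verified by enumerating the joint.
import numpy as np

psi_12 = np.array([[1.0, 0.5], [0.2, 2.0]])   # potential on (x1, x2)
psi_23 = np.array([[3.0, 1.0], [1.0, 0.8]])   # potential on (x2, x3)

# Messages along the chain
m12 = psi_12.sum(axis=0)                      # m_{1->2}(x2): sum over x1
m23 = (m12[:, None] * psi_23).sum(axis=0)     # m_{2->3}(x3): sum over x2
m32 = psi_23.sum(axis=1)                      # m_{3->2}(x2): sum over x3
m21 = (psi_12 * m32[None, :]).sum(axis=1)     # m_{2->1}(x1): sum over x2

# Marginals are products of incoming messages, normalized
p_x1 = m21 / m21.sum()
p_x2 = m12 * m32
p_x2 /= p_x2.sum()
p_x3 = m23 / m23.sum()

# Brute-force check from the full joint
joint = psi_12[:, :, None] * psi_23[None, :, :]
joint /= joint.sum()
print(np.allclose(p_x1, joint.sum(axis=(1, 2))),
      np.allclose(p_x2, joint.sum(axis=(0, 2))),
      np.allclose(p_x3, joint.sum(axis=(0, 1))))   # True True True
```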
- hidden Markov models
- forward-backward algorithm (a minimal implementation sketch appears after this list)
- HMM inference as a special case of belief propagation. This one covers MAP inference as well, which doesn't appear until a later lecture.
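
Here is a minimal forward-backward sketch for a discrete HMM, computing the posterior marginals p(z_t | x_{1:T}). The transition, emission, and initial distributions are invented for illustration, and alpha and beta are rescaled at each step for numerical stability.

```python
# Forward-backward (sum-product on an HMM chain) with per-step rescaling.
import numpy as np

pi = np.array([0.6, 0.4])                     # initial state distribution
A = np.array([[0.7, 0.3], [0.2, 0.8]])        # A[i, j] = p(z_t = j | z_{t-1} = i)
B = np.array([[0.9, 0.1], [0.3, 0.7]])        # B[i, k] = p(x_t = k | z_t = i)
obs = [0, 0, 1, 0, 1]                         # observed symbol sequence

T, S = len(obs), len(pi)
alpha = np.zeros((T, S))
beta = np.ones((T, S))

# Forward pass
alpha[0] = pi * B[:, obs[0]]
alpha[0] /= alpha[0].sum()
for t in range(1, T):
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    alpha[t] /= alpha[t].sum()

# Backward pass
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    beta[t] /= beta[t].sum()

# Posterior marginals gamma_t(i) = p(z_t = i | x_{1:T})
gamma = alpha * beta
gamma /= gamma.sum(axis=1, keepdims=True)
print(gamma)
```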
- See the references for lecture 8, since some of them use factor graphs.
- the max-product algorithm (Note that max-product, max-sum, and min-sum are all basically the same algorithm.)
- the Viterbi algorithm, the special case of max-product applied to HMMs (sketched after this list)
- HMM inference as a special case of belief propagation
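
To make the max-product/Viterbi connection concrete, here is a minimal Viterbi sketch, written as max-sum in the log domain, for the same kind of discrete HMM as above. The parameters are illustrative only.

```python
# Viterbi decoding: max-product in the log domain (max-sum) on an HMM chain.
import numpy as np

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.2, 0.8]])
B = np.array([[0.9, 0.1], [0.3, 0.7]])
obs = [0, 0, 1, 0, 1]

T, S = len(obs), len(pi)
log_pi, log_A, log_B = np.log(pi), np.log(A), np.log(B)

delta = np.zeros((T, S))                 # best log-prob of any path ending in each state
backptr = np.zeros((T, S), dtype=int)

delta[0] = log_pi + log_B[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] + log_A        # scores[i, j]: come from i, move to j
    backptr[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_B[:, obs[t]]

# Trace back the most probable state sequence
path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(backptr[t, path[-1]]))
path.reverse()
print(path)
```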
- If you're feeling rusty on linear algebra, now is a good time to brush up since the Gaussian inference lectures will make heavy use of it.
- Gaussian belief propagation
- connection between Gaussian inference and variable elimination
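
A minimal sketch of the variable-elimination connection: eliminating a block of variables from a Gaussian in information form (J, h) is exactly a Schur complement, and the result agrees with simply reading the marginal off the moment form. The covariance and mean below are made up.

```python
# Gaussian marginalization as variable elimination (Schur complement) in information form.
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((3, 3))
Sigma = M @ M.T + 3 * np.eye(3)               # a valid covariance matrix
mu = np.array([1.0, -0.5, 2.0])

J = np.linalg.inv(Sigma)                      # information matrix
h = J @ mu                                    # potential vector

# Eliminate variable 3 (index 2); keep variables 1 and 2.
a, b = [0, 1], [2]
J_ab = J[np.ix_(a, b)]
J_bb_inv = np.linalg.inv(J[np.ix_(b, b)])
J_marg = J[np.ix_(a, a)] - J_ab @ J_bb_inv @ J_ab.T
h_marg = h[a] - J_ab @ J_bb_inv @ h[b]

# Matches the marginal read off directly in moment form.
print(np.allclose(np.linalg.inv(J_marg), Sigma[np.ix_(a, a)]),
      np.allclose(np.linalg.inv(J_marg) @ h_marg, mu[a]))   # True True
```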
- Note that these nodes have quite a few linear algebra dependencies. You may want to review those before the lecture, so that the derivations will make sense.
- Kalman filter, and derivation (a minimal predict/update sketch appears after this list)
- Kalman smoother
- Viewing Kalman smoothing as a special case of forward-backward
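
To accompany the Kalman filter item above, here is a minimal predict/update sketch for a 1-D constant-velocity state-space model. All parameters and measurements are invented for illustration.

```python
# Kalman filter: alternate predict and measurement-update steps.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])        # state transition (position, velocity)
Q = 0.01 * np.eye(2)                          # process noise covariance
H = np.array([[1.0, 0.0]])                    # we observe position only
R = np.array([[0.25]])                        # measurement noise covariance

x = np.array([0.0, 1.0])                      # initial state estimate
P = np.eye(2)                                 # initial state covariance
measurements = [1.1, 1.9, 3.2, 3.8, 5.1]

for z in measurements:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    print(x)
```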
- importance sampling (a short sketch appears after this list)
- particle filter (TODO)
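
For the importance sampling item above, here is a minimal self-normalized importance sampling sketch: estimating E_p[x^2] for a target density we can only evaluate up to a constant, using samples from a broader Gaussian proposal. The target and proposal are chosen purely for illustration.

```python
# Self-normalized importance sampling with an unnormalized target density.
import numpy as np

rng = np.random.default_rng(0)

def target_unnorm(x):
    # Unnormalized density of N(2, 0.5^2)
    return np.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)

# Proposal: N(0, 3^2), easy to sample from
n = 100_000
x = rng.normal(0.0, 3.0, size=n)
q = np.exp(-0.5 * (x / 3.0) ** 2) / (3.0 * np.sqrt(2 * np.pi))

# Importance weights, normalized so they sum to one
w = target_unnorm(x) / q
w /= w.sum()

estimate = np.sum(w * x ** 2)
print(estimate)                               # should be close to 2^2 + 0.5^2 = 4.25
```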