The goal of this workshop is to introduce students to the concepts and practice of Bayesian modeling. We will begin by motivating Bayesian approaches. Next, we will introduce and apply models with conjugate priors, illustrated by the Beta-Binomial and Gamma-Poisson models. We will then introduce the two primary techniques for approximate Bayesian inference, Markov chain Monte Carlo (MCMC) and variational inference (VI). Using these techniques, we will handle semi-conjugate models, including Bayesian linear regression, Bayesian mixture models, and Bayesian hidden Markov models. Next, using more advanced VI and MCMC, and in some cases clever trickery, we will tackle models for which no conjugate priors exist, such as Bayesian logistic regression, Bayesian multiclass regression, and a racially polarized voting model. Finally, we will very briefly discuss Bayesian deep learning. For applications, we will use Python: a combination of pymc3, scikit-learn, and code we write ourselves.
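To make the conjugacy idea concrete before the exercises, here is a minimal sketch of the Beta-Binomial model mentioned above: with a Beta(a, b) prior on a success probability and k successes observed in n trials, the posterior is available in closed form, with no approximate inference required. The hyperparameters and data below are made up for illustration.

```python
def beta_binomial_posterior(a, b, n, k):
    """Closed-form posterior for the Beta-Binomial model.

    Prior: theta ~ Beta(a, b). Likelihood: k successes in n trials.
    Posterior: theta | data ~ Beta(a + k, b + n - k).
    """
    return a + k, b + (n - k)

# Hypothetical data: 14 successes in 20 trials, under a Beta(2, 2) prior.
a_post, b_post = beta_binomial_posterior(2.0, 2.0, 20, 14)

# Posterior mean of a Beta(alpha, beta) is alpha / (alpha + beta).
posterior_mean = a_post / (a_post + b_post)
print(a_post, b_post, posterior_mean)
```

The same update pattern holds for the Gamma-Poisson model: the Gamma prior's parameters are shifted by the observed counts and the number of observations.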
Why have exercises as part of the course, as opposed to just further lecture?
Why have a lab as part of the course, as opposed to just working on your own time?
Links to Exercises