Courses

1. Introduction to Deep Learning

This course introduces the fundamentals of deep learning and demonstrates its applications in computer vision. While it covers both deterministic and probabilistic deep models, the course focuses on deterministic deep models, including Deep Neural Networks, Convolutional Neural Networks, Recurrent Neural Networks, Generative Models (Generative Adversarial Networks, autoencoders, and stable diffusion), and Deep Reinforcement Learning. It will also briefly introduce probabilistic deep models, including Bayesian Neural Networks, Deep Boltzmann Machines, Deep Belief Networks, and Deep Bayesian Networks. The course is self-contained. It starts with an introduction to the background needed for learning deep models, including probability, linear algebra, and standard classification and optimization techniques. To demonstrate the various deep models, we will apply them to different computer vision tasks.
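
For a concrete flavor of the deterministic models listed above, the sketch below builds a tiny feed-forward classifier in plain NumPy; the layer sizes, weight initialization, and input dimensions are illustrative assumptions, not anything prescribed by the course.

```python
# Illustrative only: a tiny two-layer feed-forward network (one of the
# deterministic deep models listed above), written in plain NumPy.
# All layer sizes and names are made up for this sketch.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Random weights for a 784 -> 128 -> 10 classifier (e.g., small digit images).
W1, b1 = rng.normal(scale=0.01, size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(scale=0.01, size=(128, 10)), np.zeros(10)

def forward(x):
    """Forward pass: hidden ReLU layer followed by a softmax output."""
    h = relu(x @ W1 + b1)
    return softmax(h @ W2 + b2)

x = rng.normal(size=(5, 784))        # a batch of 5 fake images
print(forward(x).shape)              # (5, 10) class probabilities
```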

2. Computer Vision

This course covers core computer vision theories that deal with acquiring, processing, and analyzing images in order to reconstruct and understand the 3D scene. It will focus on the mathematical models that map a 3D scene to its 2D images, theories that reconstruct and interpret the 3D scene from its images, and methods for image feature extraction. Topics to be covered include image formation and representation, camera models, projective geometry, camera calibration, pose estimation, 3D reconstruction, motion analysis, structure from motion, target tracking, feature extraction, and object recognition. Besides computer vision, this course will also be useful for students interested in pattern recognition, image processing, robotics, human-computer interaction, and medical imaging.
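
As a small illustration of the 3D-to-2D mapping mentioned above, the sketch below implements the standard pinhole camera projection; the intrinsic parameters and camera pose are made-up values for the example.

```python
# Illustrative only: the pinhole camera model that maps a 3D point to its
# 2D image, one instance of the 3D-to-2D mappings discussed in the course.
# The intrinsic parameters below (focal length, principal point) are invented.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],    # fx, skew, cx
              [  0.0, 800.0, 240.0],    # fy, cy
              [  0.0,   0.0,   1.0]])   # intrinsic (calibration) matrix

R = np.eye(3)                # camera rotation (identity: camera aligned with world)
t = np.zeros(3)              # camera translation

def project(X_world):
    """Project a 3D point (world coordinates) to pixel coordinates."""
    X_cam = R @ X_world + t             # world frame -> camera frame
    x = K @ X_cam                        # camera frame -> homogeneous pixels
    return x[:2] / x[2]                  # perspective division

print(project(np.array([0.1, -0.05, 2.0])))   # -> [360. 220.]
```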

3. Intro to Probabilistic Graphical Models

As a marriage between probability theory and graph theory, Probabilistic Graphical Models (PGMs) provide a tool for dealing with two problems that occur throughout applied mathematics and engineering: uncertainty and complexity. Under a PGM, data are modeled by a graph, where the nodes represent the random variables in the data and the links capture the probabilistic dependencies among the variables. Using a PGM, we can discover knowledge, predict future events, and infer hidden causes.
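
As a minimal illustration of this idea, the sketch below encodes a toy three-node Bayesian network and uses its factored joint distribution to answer a simple query about a hidden cause; the variables and probabilities are invented for the example.

```python
# Illustrative only: a three-node Bayesian network (Rain, Sprinkler, WetGrass)
# showing how the graph's links turn into a factored joint distribution.
# All probabilities are invented for this example.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.4, False: 0.6}
# P(WetGrass = True | Rain, Sprinkler)
P_wet_given = {(True, True): 0.99, (True, False): 0.9,
               (False, True): 0.8, (False, False): 0.05}

def joint(rain, sprinkler, wet):
    """Joint probability factored along the graph:
    P(R, S, W) = P(R) * P(S) * P(W | R, S)."""
    p_w = P_wet_given[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[sprinkler] * (p_w if wet else 1 - p_w)

# Inferring a hidden cause by enumeration: P(Rain = True | WetGrass = True).
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(num / den)
```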

This 3-credit course will introduce theories and applications of both directed and undirected PGMs, including Bayesian Networks, Structural Causal Models, Markov Random Fields, Hidden Markov Models, Dynamic Bayesian Networks, Influence Diagrams, and Factor Graphs. Theoretically, we will discuss various model learning and inference methods. For learning, the course will cover parameter and structure learning under both complete and incomplete data. For inference, the course will cover exact inference methods, approximate inference methods (e.g., loopy belief propagation), variational methods, and numerical sampling methods (e.g., MCMC). Application-wise, we will demonstrate the use of PGMs in different fields, including computer vision, deep learning, and causal discovery. Through this course, students will learn and understand the basic theories underlying different graphical models, implement important PGM learning and inference techniques, and solve real-world problems using PGMs.
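
To give one concrete instance of the numerical sampling methods mentioned above, here is a bare-bones Metropolis-Hastings sampler, the simplest member of the MCMC family; the target density is an arbitrary toy choice, not something taken from the course.

```python
# Illustrative only: a minimal Metropolis-Hastings sampler, one example of the
# MCMC-style numerical sampling methods used for approximate inference.
# The unnormalized target density below is a toy choice for the demo.
import numpy as np

rng = np.random.default_rng(0)

def unnorm_target(x):
    # Mixture of two Gaussian bumps; the normalizing constant is unknown on purpose.
    return np.exp(-0.5 * (x - 2.0) ** 2) + 0.5 * np.exp(-0.5 * (x + 2.0) ** 2)

def metropolis_hastings(n_samples=10000, step=1.0):
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.normal(scale=step)          # symmetric random-walk proposal
        accept_prob = min(1.0, unnorm_target(proposal) / unnorm_target(x))
        if rng.random() < accept_prob:
            x = proposal                               # accept the move
        samples.append(x)                              # otherwise keep the current state
    return np.array(samples)

print(metropolis_hastings().mean())   # sample-based estimate of E[x]
```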

4. Advanced Topics in Probabilistic Deep Learning

Current deterministic deep learning models tend to overfit, cannot effectively quantify their prediction uncertainty, and involve a complex, heuristic training process. Probabilistic deep learning represents cutting-edge research in deep learning that can effectively address these limitations.

The goal of this course is to introduce probabilistic deep learning and the latest developments in probabilistic deep learning and its applications. Specifically, the course will focus on three types of probabilistic deep learning models: probabilistic deep neural networks (PDNNs), Bayesian deep neural networks (BDNNs), and deep probabilistic graphical models (DPGMs). The course will start with an introduction to the theories behind the three probabilistic deep learning models, the associated learning and inference methods, and the various challenges facing these models. The second part of the class will be devoted to student presentations, in which students review the top AI/ML conferences to identify the latest developments in the three types of models and their applications. Each student will select three papers and give a detailed discussion of the work. Besides student presentations, we will also invite guest speakers from industry and academia to give talks on related topics.
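
As a rough illustration of how a Bayesian treatment of a network yields prediction uncertainty (one of the motivations above), the sketch below averages predictions over sampled weights; the "posterior" samples here are simulated perturbations used purely for demonstration, not the output of any particular inference method from the course.

```python
# Illustrative only: quantifying prediction uncertainty by keeping samples of
# network weights instead of a single point estimate, in the spirit of a
# Bayesian deep neural network. The weight "posterior" is faked for the demo.
import numpy as np

rng = np.random.default_rng(0)

def predict(x, W, b):
    return np.tanh(x @ W + b).sum(axis=1)       # a toy one-hidden-layer regressor

# Pretend posterior: 50 weight samples scattered around a mean weight setting.
W_mean, b_mean = rng.normal(size=(3, 8)), np.zeros(8)
weight_samples = [(W_mean + rng.normal(scale=0.1, size=W_mean.shape), b_mean)
                  for _ in range(50)]

x = rng.normal(size=(4, 3))                     # 4 test inputs, 3 features each
preds = np.stack([predict(x, W, b) for W, b in weight_samples])

print("predictive mean:", preds.mean(axis=0))
print("predictive std :", preds.std(axis=0))    # per-input uncertainty estimate
```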

5. Pattern Recognition

This course introduces fundamental concepts, theories, and algorithms for pattern recognition and machine learning. Topics to be covered include linear regression, linear classification, support vector machines, dimensionality reduction, clustering, boosting, and probabilistic graphical models.
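
As a minimal example of the first topic on that list, the sketch below fits an ordinary least-squares linear regression to synthetic data; the data-generating coefficients are invented for the demo.

```python
# Illustrative only: ordinary least-squares linear regression, the first topic
# listed above, fit to synthetic data so the sketch is self-contained.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = 3x - 1 plus Gaussian noise.
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] - 1.0 + rng.normal(scale=0.1, size=100)

# Add a bias column and solve the least-squares problem.
X_design = np.hstack([X, np.ones((100, 1))])
w, *_ = np.linalg.lstsq(X_design, y, rcond=None)

print("slope ~ 3, intercept ~ -1:", w)
```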

6. Mathematical Techniques for Computer Vision, Graphics and Robotics

This course is taught by Prof. Chuck Stewart. The goal of this course is to provide an introduction to some of the mathematical background needed to do research in computer vision, computer graphics and robotics.

7. Statistical and Learning Techniques for Computer Vision