EECS 545: Machine Learning

Class overview:

The goal of machine learning is to develop computer algorithms that can learn from data or past experience and generalize well to new, unseen data. Over the past few decades, machine learning has become a powerful tool in artificial intelligence and data mining, and it has had a major impact on many real-world applications.

This course will give a graduate-level introduction to machine learning, covering its mathematical foundations, the derivation and implementation of core algorithms, and their applications. Topics include supervised learning, unsupervised learning, learning theory, graphical models, and reinforcement learning. The course will also cover recent research topics such as sparsity and feature selection, Bayesian techniques, and deep learning. In addition to mathematical foundations, it will emphasize practical applications of machine learning in artificial intelligence and data mining, including computer vision, speech recognition, text processing, bioinformatics, and robot perception and control. The course will require an open-ended research project.

Syllabus:

  • Introduction (1 lecture)

    • Overview

    • Probability review

    • Loss function

    • Maximum likelihood

    • MAP

  • Regression (2 lectures)

    • Linear regression

    • Gradient descent and stochastic gradient descent (a minimal code sketch appears after the syllabus)

    • Newton's method

    • Probabilistic interpretation of linear regression: Maximum likelihood

  • Classification (2 lectures)

    • k-nearest neighbors (kNN)

    • Naive Bayes

    • Linear discriminant analysis / Gaussian discriminant analysis

    • Logistic regression

    • Generalized linear models, softmax regression

  • Kernel methods (4 lectures)

    • Kernel density estimation, kernel regression

    • Support vector machines

    • Convex optimization

    • Gaussian processes

  • Regularization (2 lectures)

    • L2 regularization

    • L1 regularization, sparsity and feature selection

    • Bias-variance tradeoff

    • Overfitting

    • Cross validation, model selection

    • Advice for developing machine learning algorithms

  • Neural networks (1 lecture)

    • Perceptron

    • MLP and back-propagation

  • Learning theory (2 lectures)

    • Sample complexity

    • VC dimension

    • PAC learning

    • Error bounds

  • Graphical models (4 lectures)

    • Bayesian networks

      • Representation

      • Exact inference

      • Sampling-based inference

    • Learning in Bayesian networks

      • Maximum likelihood estimation

      • Expectation maximization

      • Hidden Markov Models (HMM)

    • Structure learning

    • Bayesian inference and learning

    • Markov networks

      • Inference and learning

  • Unsupervised learning (4 lectures)

    • Clustering: K-means (a minimal code sketch appears after the syllabus)

    • Gaussian mixtures

    • Expectation Maximization (revisited)

    • PCA

    • Dimensionality reduction: ISOMAP, LLE

    • ICA

    • Sparse coding

    • Boltzmann machines and autoencoders, Deep belief networks

  • Reinforcement learning (3 lectures)

    • MDP

    • Value iteration and policy iteration

    • Dynamic programming

    • Value function approximation

    • TD learning
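
To give a flavor of the implementation work in the course, here is a minimal sketch of batch gradient descent for linear regression (see the Regression unit above). The toy data, learning rate, and variable names are illustrative choices, not course materials:

    import numpy as np

    # Toy problem: recover a known weight vector from noisy linear data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                 # 100 examples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.1 * rng.normal(size=100)   # targets with Gaussian noise

    w = np.zeros(3)    # parameters, initialized at zero
    lr = 0.1           # learning rate (step size)

    for _ in range(500):
        residual = X @ w - y               # prediction errors on all examples
        grad = X.T @ residual / len(y)     # gradient of the mean squared error
        w -= lr * grad                     # take a gradient step

    print(w)  # converges close to true_w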
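
Similarly, here is a sketch of K-means clustering from the unsupervised learning unit. Again, the toy data and parameters are illustrative, and empty-cluster handling is omitted for brevity:

    import numpy as np

    # Toy data: two well-separated Gaussian blobs in the plane.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),
                   rng.normal(5.0, 1.0, (50, 2))])

    k = 2
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random initial centers

    for _ in range(20):
        # Assignment step: each point joins its nearest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])

    print(centers)  # one center near (0, 0), the other near (5, 5)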
