Applied Machine Learning

Instructor information

  • Name: Roozbeh Razavi-Far 

  • Office: CEI 2134

  • Office Hours: Fridays from 14:30 until 16:00


Class and lab information

  • Location: University of Windsor, Education Building 1101

  • Time: Wednesdays from 16:00 until 18:50

Course Description:

This course provides an introduction to the theory and practice of machine learning, which aims to enable computers to learn without being explicitly programmed. In recent years, machine learning has been used extensively in the design of autonomous cars, smart robots, and smart sensors. This course introduces the most effective and practical machine learning techniques. We will start by reviewing the principles of machine learning, including the learning problem, supervised learning, unsupervised learning, feature selection, overfitting, the theory of generalization, the VC dimension, evaluating the hypothesis, regularization, validation, and the bias-variance trade-off. We will then move on to advanced strategies for ensemble learning, incremental learning, imbalanced learning, semi-supervised learning, reinforcement learning, and deep learning, and will familiarize ourselves with the most prominent models, such as convolutional neural networks and autoencoders. We will complete the course with state-of-the-art topics.

As a student, you can expect to learn the methods, concepts, and strategies required to put machine learning to work in practical applications, and you will gain hands-on experience with machine learning algorithms through assignments and course projects. An undergraduate background in statistics and linear algebra is expected; the lectures include a refresher on the required basic concepts. Students without formal training in data mining and programming will still find the skills they acquire in this course valuable in their studies and careers.

Required Resources:

[1] Learning from Data, Yaser S. Abu-Mostafa, Malik Magdon-Ismail, and Hsuan-Tien Lin, ISBN: 9781600490064.

[2] Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, ISBN: 9780262035613.


The following books are strongly recommended:

[3] Machine Learning, T. Mitchell, McGraw-Hill.

[4] Reinforcement Learning: An Introduction, R.S. Sutton and A.G. Barto, MIT Press.

[5] Pattern Recognition and Machine Learning, C.M. Bishop, Springer.


Course Schedule

The following course schedule is approximate. 

  • Week 01:

    • Teaching subjects: introduction to machine learning, the learning problem, types of learning, perceptron learning algorithm.

    • Textbook Chapter or Readings: Ref. [1] Ch. [1], Lecture notes.

  • Week 02:

    • Teaching subjects: a review of probability and linear algebra, linear models, regression, and classification.

    • Textbook Chapter or Readings: Ref. [1] Ch. [3], Lecture notes.

  • Week 03:

    • Teaching subjects: error measures, capacity, underfitting, overfitting, multilayer perceptron.

    • Textbook Chapter or Readings: Ref. [1] Ch. [1,4], Lecture notes.

  • Week 04:

    • Teaching subjects: backpropagation, stochastic gradient descent, regression, and classification.

    • Textbook Chapter or Readings: Ref. [1] Ch. [3], Lecture notes.

  • Week 05:

    • Teaching subjects: theory of generalization, approximation capacity, the VC dimension, evaluating the hypothesis, performance measures.

    • Textbook Chapter or Readings: Ref. [1] Ch. [2], Lecture notes.

  • Week 06:

    • Teaching subjects: regularization, hyperparameters, validation, bias-variance trade-off.  

    • Textbook Chapter or Readings: Ref. [1] Ch. [4], Lecture notes.

  • Week 07: Study week

  • Week 08:

    • Teaching subjects: support vector machines, kernel methods, and radial basis functions.

    • Textbook Chapter or Readings: Ref. [1] Ch. [6], e-chapter, Lecture notes.

  • Week 09:

    • Teaching subjects: learning principles, generative models, feature selection.

    • Textbook Chapter or Readings: Ref. [1] Ch. [5], Lecture notes.

  • Week 10:

    • Teaching subjects: ensemble learning, bagging, boosting, a mixture of experts, incremental learning, class imbalanced learning.

    • Textbook Chapter or Readings: Lecture notes.

  • Week 11:

    • Teaching subjects: semi-supervised learning, reinforcement learning, Markov decision process. 

    • Textbook Chapter or Readings: Lecture notes.

  • Week 12:

    • Teaching subjects: deep learning, convolutional neural networks, pooling, and convolutional layer models. 

    • Textbook Chapter or Readings: Ref. [2] Ch. [9,10], Lecture notes.

  • Week 13:

    • Teaching subjects: deep learning, recurrent neural networks, autoencoders, deep reinforcement learning. 

    • Textbook Chapter or Readings: Ref. [2] Ch. [14], Lecture notes.

There will be course projects that involve developing and programming machine learning algorithms in MATLAB, Python, R, FORTRAN, C, or C++. The projects must be demonstrated during the semester (the Primary and Final Demos are mandatory).
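To give a flavor of the project work, here is a minimal sketch of the perceptron learning algorithm covered in Week 01, written in Python (one of the accepted project languages). This is an illustrative example only, not a course artifact; the function name and the toy dataset are our own:

```python
import numpy as np

def perceptron_learn(X, y, max_iters=1000):
    """Perceptron learning algorithm (PLA).

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns a weight vector w (bias stored in w[0]) that separates
    the data, assuming it is linearly separable.
    """
    # Prepend a constant 1 to each input so w[0] acts as the bias term.
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(max_iters):
        # A score of exactly 0 counts as misclassified, so learning
        # starts from the all-zero weight vector.
        preds = np.where(Xb @ w > 0, 1, -1)
        misclassified = np.flatnonzero(preds != y)
        if misclassified.size == 0:
            return w  # every point is correctly classified
        # PLA update: nudge w toward the first misclassified example.
        i = misclassified[0]
        w = w + y[i] * Xb[i]
    return w

# Tiny linearly separable example: label is +1 when x1 + x2 > 1.
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.1], [1.5, 0.3]])
y = np.array([-1, 1, -1, 1])
w = perceptron_learn(X, y)
Xb = np.hstack([np.ones((4, 1)), X])
print(np.where(Xb @ w > 0, 1, -1))  # → [-1  1 -1  1] (matches y)
```

For linearly separable data the PLA is guaranteed to converge in a finite number of updates (see Ref. [1], Ch. 1); the `max_iters` cap simply guards against non-separable inputs.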


Evaluation Methods

The course grade will be evaluated as follows:

  • Participation: 5%

  • Assignments: 10%

  • Exam (closed-book): 40%

  • Final project (group): 45%
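The final grade is the weighted sum of the four components above. A short worked example in Python, using the stated weights with hypothetical component scores (each out of 100):

```python
# Weights from the evaluation breakdown; the scores are made-up
# examples for illustration, not real marks.
weights = {"participation": 0.05, "assignments": 0.10,
           "exam": 0.40, "project": 0.45}
scores = {"participation": 90, "assignments": 85,
          "exam": 78, "project": 88}

final = sum(weights[k] * scores[k] for k in weights)
print(round(final, 1))  # → 83.8
```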

Teaching Assistants:

  • Jeremy Feng

  • Wandong Zhang

  • Maryam F. Zanjani


Roozbeh Razavi-Far

Ph.D., Lecturer, Academic Adviser,

Coordinator of Master of Engineering Programs

Faculty of Engineering, University of Windsor
