CPE 436 Machine Learning
This course is designed for seniors in the Computer Engineering major and introduces the fundamentals and theory of machine learning algorithms. Students are taught the theory, design, and implementation of machine learning algorithms such as Bayes classifiers, decision trees, neural networks, deep neural networks (DNNs), evolutionary algorithms, and inductive learning, and they implement and compare different algorithms on learning problems. The project is an integral component of this course.
Prerequisites:
0600304, 0612207
0612436
(3-0-3)

Credits and Contact Hours

3 credits, 43 hours

Course Instructor Name

Dr. Mohammad Allaho, Dr. Abdullah Alshaibani

Textbook

Machine Learning: A Probabilistic Perspective, Kevin Murphy.

The Elements of Statistical Learning, Trevor Hastie, Robert Tibshirani, and Jerome Friedman.

A Course in Machine Learning, Hal Daumé III. ( http://ciml.info/ )

References

Pattern Recognition and Machine Learning, C. M. Bishop, Springer, 2006.

Pattern Classification, Duda, Hart, and Stork, Second Edition, Wiley, 2001.

Catalog Description

This course provides an introduction to machine learning with a special focus on engineering applications. The course starts with a mathematical background required for machine learning and covers approaches for supervised learning (linear models, kernel methods, decision trees, neural networks), unsupervised learning (clustering, dimensionality reduction), and reinforcement learning as well as theoretical foundations of machine learning (learning theory, optimization). The project is an integral component of this course.
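To give a flavor of the first supervised-learning topic mentioned above (linear models), here is a minimal least-squares fit in NumPy. The synthetic data, the true parameters (slope 2, intercept 1), and the noise level are illustrative assumptions, not course material.

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative 1-D data drawn from y = 2x + 1 plus Gaussian noise.
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + rng.normal(scale=0.1, size=100)

# Design matrix with a bias column; solve the least-squares problem.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
```

With 100 points and low noise, the fitted `slope` and `intercept` land close to the true values used to generate the data.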

Prerequisites

ENGR-304, CpE-207

Specific Goals for the Course

Upon successful completion of this course, students will be able to:

Know how to use the general-to-specific ordering of hypotheses.

Construct a decision tree to learn a concept in a real-life problem.

Use different information-gain functions to select the most appropriate attribute to split on, based on the information given in the problem.

Design, implement, execute and obtain results of a simple back-propagation neural network for a two-class classification problem. (Student outcomes: 1, 6)

Use a naïve Bayes classifier in learning problems.

Use lazy learners such as the k-nearest neighbors algorithm and radial basis functions in learning problems. (Student outcomes: 1)

Identify the type of ML problem (the type of dataset and the required output) and apply a suitable ML model with appropriate optimization techniques. (Student outcomes: 1, 6)

Become familiar with the tools and libraries used to build ML models.
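As a sketch of the back-propagation outcome above (a small network for a two-class problem), the following trains a one-hidden-layer network on XOR using only NumPy. The dataset, layer sizes, learning rate, and epoch count are illustrative assumptions; a course project would use a real dataset and a proper library.

```python
import numpy as np

# Toy two-class problem (XOR); all hyperparameters are illustrative choices.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units, sigmoid activations throughout.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
losses = []
for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass: gradients of the mean squared error,
    # propagated through the sigmoids via the chain rule.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)
```

The training loss in `losses` decreases over the epochs, which is the basic behavior students verify when implementing back-propagation by hand.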

Topics to Be Covered

Introduction (what is ML, importance of ML, types of ML from different perspectives)

Math review (Linear algebra, probability theory, numerical computation, decision theory, information theory)

Linear regression, non-linear basis functions

Overfitting, bias-variance trade-off, evaluation

Naïve Bayes

Logistic regression

Multi-class classification

Support Vector Machine (SVM), Kernel methods

Nearest Neighbors

Decision Trees

Ensemble methods, Boosting, Random forests

Neural Networks (perceptron, back-propagation concept)

Intro to CNN, RNN, and deep learning

Clustering (k-means, GMM)

Dimensionality reduction (PCA, Independent Component Analysis, and LDA)

Feature Selection, Genetic Algorithms

Reinforcement Learning (online learning concept, the learning task, Q-learning, ...)

Ethics of AI
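To illustrate one topic from the list above (k-means from the clustering unit), here is a minimal Lloyd's-algorithm sketch in NumPy. The two synthetic blobs, k = 2, and the deterministic initialization are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated illustrative Gaussian blobs (50 points each).
X = np.vstack([rng.normal(0, 0.5, size=(50, 2)),
               rng.normal(5, 0.5, size=(50, 2))])

k = 2
# Deterministic initialization: one seed point from each region.
centers = X[[0, 50]].copy()
for _ in range(20):
    # Assignment step: each point joins its nearest center.
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Update step: each center moves to the mean of its points.
    centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
```

On data this well separated, the algorithm recovers the two blobs: the first 50 points share one label and the last 50 share the other.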